
Rebase to v2.50.0 #763


Open: wants to merge 976 commits into vfs-2.50.0
Conversation

Member

@dscho dscho commented Jun 6, 2025

This rebases microsoft/git's branch thicket on top of Git for Windows v2.50.0. Please find the range-diff below.

@dscho dscho self-assigned this Jun 6, 2025
Member Author

dscho commented Jun 9, 2025

Scalar Functional Tests / Scalar Functional Tests (macos-13, ignored) (pull_request): failing after 1m

This is the error:

 /Users/runner/work/_actions/actions/setup-dotnet/v4/externals/install-dotnet.sh --skip-non-versioned-files --runtime dotnet --channel LTS
dotnet-install: .NET Core Runtime with version '8.0.16' is already installed.
/Users/runner/work/_actions/actions/setup-dotnet/v4/externals/install-dotnet.sh --skip-non-versioned-files --channel 3.1
/Users/runner/work/_actions/actions/setup-dotnet/v4/externals/install-dotnet.sh: line 1440: link_types[$link_index]: unbound variable
Error: Failed to install dotnet, exit code: 1. /Users/runner/work/_actions/actions/setup-dotnet/v4/externals/install-dotnet.sh: line 1440: link_types[$link_index]: unbound variable

And it looks like this is a known issue (still unresolved at the time of writing).
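
For anyone hitting the same setup-dotnet failure: the error is a classic bash `set -u` pitfall, reproducible in isolation. The snippet below is illustrative, not the actual install-dotnet.sh code; `link_types`/`link_index` merely mirror the names from the log.

```shell
# Illustrative reproduction of the unbound-variable failure: under
# `set -u`, bash treats indexing an empty array as an unbound variable.
# Run explicitly under bash, since install-dotnet.sh is a bash script.
if bash -uc 'link_types=(); link_index=0; : "${link_types[$link_index]}"' 2>/dev/null
then
    echo "unguarded access succeeded (unexpected)"
else
    echo "unguarded access aborted: unbound variable"
fi

# The usual fix is a ':-' default expansion, which substitutes an empty
# string instead of aborting the script:
bash -uc 'link_types=(); link_index=0; echo "guarded access: [${link_types[$link_index]:-}]"'
```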

@dscho dscho force-pushed the tentative/vfs-2.50.0 branch 5 times, most recently from 400e9a8 to fc2fed8 on June 16, 2025 19:24
Kevin Willford and others added 23 commits June 16, 2025 21:30
While using the reset --stdin feature on Windows, a path that was added
may have a trailing \r that was not being removed, so it did not match
the corresponding path in the index and was not reset.

Signed-off-by: Kevin Willford <kewillf@microsoft.com>
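
The effect of the fix can be sketched in shell (an illustration of the behavior, not the actual C change):

```shell
# A path read on Windows may carry a trailing CR from CRLF line endings;
# it then fails to match the path as stored in the index.
path_as_read=$(printf 'dir/file.txt\r')
path_in_index='dir/file.txt'

if [ "$path_as_read" != "$path_in_index" ]; then
    echo "no match before trimming"
fi

# Trimming the trailing CR (what the fix does on the C side) restores
# the match against the index entry:
trimmed=$(printf '%s' "$path_as_read" | tr -d '\r')
if [ "$trimmed" = "$path_in_index" ]; then
    echo "match after trimming"
fi
```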
It has been a long-standing practice in Git for Windows to append
`.windows.<n>`, and in microsoft/git to append `.vfs.0.0`. Let's keep
doing that.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Since we really want to be based on a `.vfs.*` tag, let's make sure that
there was a new-enough one, i.e. one that agrees with the first three
version numbers of the recorded default version.

This prevents e.g. v2.22.0.vfs.0.<some-huge-number>.<commit> from being
used when the current release train was not yet tagged.

It is important to get the first three numbers of the version right
because e.g. Scalar makes decisions depending on those (such as assuming
that the `git maintenance` built-in is not available, even though it
actually _is_ available).

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
With this commit, we gather statistics about the sizes of commits,
trees, and blobs in the repository, and then present them in the form
of "hexbins", i.e. log(16) histograms that show how many objects fall
into the 0..15 bytes range, the 16..255 range, the 256..4095 range, etc.

For commits, we also show the total count grouped by the number of
parents, and for trees we additionally show the total count grouped by
number of entries in the form of "qbins", i.e. log(4) histograms.

Signed-off-by: Jeff Hostetler <jeffhostetler@github.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
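
The bucketing can be sketched as follows; this is a sketch of the scheme described above, not the builtin's actual code, and `hexbin`/`qbin` are made-up helper names:

```shell
# hexbin: log(16) bucket index, so sizes 0..15 -> bucket 0,
# 16..255 -> bucket 1, 256..4095 -> bucket 2, and so on.
hexbin() {
    size=$1 bin=0
    while [ "$size" -gt 15 ]; do
        size=$((size >> 4))   # shift off one hex digit
        bin=$((bin + 1))
    done
    echo "$bin"
}

# qbin: the log(4) analogue used for tree-entry counts:
# 0..3 -> bucket 0, 4..15 -> bucket 1, 16..63 -> bucket 2, ...
qbin() {
    n=$1 bin=0
    while [ "$n" -gt 3 ]; do
        n=$((n >> 2))
        bin=$((bin + 1))
    done
    echo "$bin"
}

hexbin 4095   # prints 2
qbin 64       # prints 3
```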
This header file will accumulate GVFS-specific definitions.

Signed-off-by: Kevin Willford <kewillf@microsoft.com>
Create `struct large_item` and `struct large_item_vec` to capture the
n largest commits, trees, and blobs under various scaling dimensions,
such as size in bytes, number of commit parents, or number of entries
in a tree.

Each of these has a command-line option to set it independently.

Signed-off-by: Jeff Hostetler <jeffhostetler@github.com>
This does not do anything yet. The next patches will add various values
for that config setting that correspond to the various features
offered/required by GVFS.

Signed-off-by: Kevin Willford <kewillf@microsoft.com>

gvfs: refactor loading the core.gvfs config value

This code change makes sure that the config value for core_gvfs
is always loaded before checking it.

Signed-off-by: Kevin Willford <kewillf@microsoft.com>
Include the pathname of each blob or tree in the large_item_vec
to help identify the file or directory associated with the OID
and size information.

This pathname is computed during the path walk, so it reflects the
first observed pathname seen for that OID during the traversal over
all of the refs.  Since the file or directory could have moved
(without being modified), there may be multiple "correct" pathnames
for a particular OID.  Since we do not control the ref traversal
order, we should consider it to be a "suggested pathname" for the OID.

Signed-off-by: Jeff Hostetler <jeffhostetler@github.com>
This takes a substantial amount of time, and if the user is reasonably
sure that the files' integrity is not compromised, that time can be saved.

Git no longer verifies the SHA-1 by default, anyway.

Signed-off-by: Kevin Willford <kewillf@microsoft.com>

Update for 2023-02-27: This feature was upstreamed as the index.skipHash
config option. This resulted in some changes to the struct and some of
the setup code. In particular, the config reading was moved to
prepare_repo_settings(), so the core.gvfs bit check was moved there,
too.

Signed-off-by: Kevin Willford <kewillf@microsoft.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhostetler@github.com>
Signed-off-by: Kevin Willford <kewillf@microsoft.com>
Signed-off-by: Jeff Hostetler <jeffhostetler@github.com>
Prevent the sparse checkout from deleting files that are marked with the
skip-worktree bit and are not in the sparse-checkout file.

This is because everything with the skip-worktree bit turned on is being
virtualized and will be removed with the change of HEAD.

There was only one failing test when running with these changes: it
verified that the worktree narrows on checkout, which was expected to
fail since we no longer narrow the worktree.

Update 2022-04-05: temporarily set 'sparse.expectfilesoutsideofpatterns' in
test (until we start disabling the "remove present-despite-SKIP_WORKTREE"
behavior with 'core.virtualfilesystem' in a later commit).

Signed-off-by: Kevin Willford <kewillf@microsoft.com>
Computing `git name-rev` on each commit, tree, and blob in each
of the various large_item_vecs can be very expensive if there are
too many refs, especially if the user doesn't need the result.
Let's make it optional.

The `--no-name-rev` option can save 50 calls to `git name-rev`
since we have 5 large_item_vec's and each defaults to 10 items.

Signed-off-by: Jeff Hostetler <jeffhostetler@github.com>
This adds a hard-coded call to GVFS.hooks.exe before and after each Git
command runs.

To make sure that this is only called on repositories cloned with GVFS, we
test for the tell-tale .gvfs.

2021-10-30: Recent movement of find_hook() to hook.c required moving these
changes out of run-command.c to hook.c.

Signed-off-by: Ben Peart <Ben.Peart@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
While performing a fetch with a virtual file system we know that there
will be missing objects and we don't want to download them just because
of the reachability of the commits.  We also don't want to download a
pack file with commits, trees, and blobs since these will be downloaded
on demand.

This flag will skip the first connectivity check and by returning zero
will skip the upload pack. It will also skip the second connectivity
check but continue to update the branches to the latest commit ids.

Signed-off-by: Kevin Willford <kewillf@microsoft.com>
This backports the `ds/advice-sparse-index-expansion` patches into
`microsoft/git` which _just_ missed the v2.46.0 window.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Suggested by Ben Peart.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Ensure all filters and EOL conversions are blocked when running under
GVFS so that our projected file sizes will match the actual file size
when it is hydrated on the local machine.

Signed-off-by: Ben Peart <Ben.Peart@microsoft.com>
Signed-off-by: Jeff Hostetler <jeffhostetler@github.com>
Verify that the core.hooksPath configuration is respected by the
pre-command hook. The original regression test was written by
Alejandro Pauly.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
dscho and others added 24 commits June 16, 2025 21:30
This pull request aims to correct a pretty big issue when dealing with
UNINTERESTING objects in the path-walk API; it somehow only surfaced
when performing a push from a shallow clone.

This will require rewriting the upstream version so this is avoided from
the start, but we can do a forward fix for now.

The key issue is that the path-walk API was not walking UNINTERESTING
trees at the right time, and the way it was being done was more
complicated than it needed to be. This changes some of the way the
path-walk API works in the presence of UNINTERESTING commits, but these
are good changes to make.

I had briefly attempted to remove the use of the `edge_aggressive`
option in `struct path_walk_info` in favor of using the
`--objects-edge-aggressive` option in the revision struct. When I
started down that road, though, I somehow got myself into a bind of
things not working correctly. I backed out to this version that is
working with our test cases.

I tested this using the thin and big pack tests in `p5313` which had the
same performance as before this change.

The new change is that in a shallow clone we can get the same `git push`
improvements.

I was hung up on testing this for a long time as I wasn't getting the
same results in my shallow clone as in my regular clones. It turns out
that I had forgotten to use `--no-reuse-delta` in my test command, so it
was picking the deltas that were given by the initial clone instead of
picking new ones per the algorithm. 🤦🏻
Introduce a new maintenance task, `cache-local-objects`, that operates
on Scalar or VFS for Git repositories with a per-volume, shared object
cache (specified by `gvfs.sharedCache`) to migrate packfiles and loose
objects from the repository object directory to the shared cache.

Older versions of `microsoft/git` incorrectly placed packfiles in the
repository object directory instead of the shared cache; this task will
help clean up existing clones impacted by that issue.

Migration of packfiles involves the following steps for each pack:

1. Hardlink (or copy):
   a. the .pack file
   b. the .keep file
   c. the .rev file
2. Move (or copy + delete) the .idx file
3. Delete/unlink:
   a. the .pack file
   b. the .keep file
   c. the .rev file

Moving the index file after the others ensures the pack is not read
from the new cache directory until all associated files (rev, keep)
also exist there.

Moving loose objects operates as a move, or copy + delete.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
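
The ordering above can be sketched in shell; scratch directories stand in for the repository pack directory and the shared cache, and the fallback-to-copy mirrors the cross-volume case where hardlinking fails:

```shell
# Scratch directories standing in for .git/objects/pack and the shared
# cache; the file names are placeholders, not a real pack.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/pack-X.pack" "$src/pack-X.keep" "$src/pack-X.rev" "$src/pack-X.idx"

# 1. Hardlink (or fall back to copy, e.g. across volumes) everything
#    except the .idx file:
for ext in pack keep rev; do
    ln "$src/pack-X.$ext" "$dst/" 2>/dev/null || cp "$src/pack-X.$ext" "$dst/"
done

# 2. Move the .idx last: readers only consider a pack once its .idx
#    exists, so the .pack/.keep/.rev files are guaranteed to be in
#    place first.
mv "$src/pack-X.idx" "$dst/"

# 3. Remove the originals now that the cached copy is complete.
rm -f "$src/pack-X.pack" "$src/pack-X.keep" "$src/pack-X.rev"

ls "$dst"
```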
The --path-walk option in `git pack-objects` is implied by the
pack.usePathWalk=true config value. This is intended to help the
packfile generation within `git push` specifically.

While this config does enable the path-walk feature, it does not lead to
the expected levels of compression in the cases it was designed to
handle. This is due to the default implication of the --reuse-delta
option as well as auto-GC.

In the performance tests used to evaluate the --path-walk option, such
as those in p5313, the --no-reuse-delta option is used to ensure that
deltas are recomputed according to the new object walk. However, it was
assumed (I assumed this) that when the objects were loose from
client-side operations that better deltas would be computed during this
operation. This wasn't confirmed because the test process used data that
was fetched from real repositories and thus existed in packed form only.

I was able to confirm that this does not reproduce when the objects to
push are loose. Careful use of making the pushed commit unreachable and
loosening the objects via `git repack -Ad` helps to confirm my
suspicions here. Independent of this change, I'm pushing for these
pipeline agents to set `gc.auto=0` before creating their Git objects. In
the current setup, the repo is adding objects and then incrementally
repacking them and ending up with bad cross-path deltas. This approach
can help scenarios where that makes sense, but will not cover all of our
users without them choosing to opt-in to background maintenance (and
even then, an incremental repack could cost them efficiency).

In order to make sure we are getting the intended compression in `git
push`, this change enforces the spawned `git pack-objects` process to
use `--no-reuse-delta`.

As far as I can tell, the main motivation for implying the --reuse-delta
option by default is two-fold:

 1. The code in send-pack.c that executes 'git pack-objects' is ignorant
    of whether the current process is a client pushing to a remote or a
    remote sending a fetch or clone to a client.

 2. For servers, it is critical that they trust the previously computed
    deltas whenever possible, or they could overload their CPU
    resources.

There's also the side that most servers use repacking logic that will
replace any bad deltas that are sent by clients (or at least, that's the
hope; we've seen that repacks can also pick bad deltas).

This commit also adds a test case that demonstrates that `git -c
pack.usePathWalk=true push` now avoids reusing deltas.

To do this, the test case constructs a pack with a horrendously
inefficient delta object, then verifies that the pack on the receiving
side of the `push` fails to have such an inefficient delta.

The test case would probably be a lot more readable if hex numbers were
used instead of octal numbers, but alas, `printf "\x<hex>"` is not
portable, only `printf "\<octal>"` is. For example, dash's built-in
`printf` function simply prints `\x` verbatim while bash's built-in
happily converts this construct to the corresponding byte.

Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
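
The portability point about `printf` is easy to demonstrate: the octal escape is specified by POSIX, while `\x<hex>` is an extension that dash's built-in does not understand.

```shell
# Octal escapes work in every POSIX printf (dash, bash, /usr/bin/printf):
printf '\101\102\103\n'    # prints ABC

# "\x41" is not portable: dash's built-in printf echoes the characters
# '\', 'x', '4', '1' verbatim instead of emitting the byte 0x41.
```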
Git v2.48.0 has become even more stringent about leaks.

Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Add the `cache-local-objects` maintenance task to the list of tasks run
by the `scalar run` command. It's often easier for users to run the
shorter `scalar run` command than the equivalent `git maintenance`
command.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
The --path-walk option in 'git pack-objects' is implied by the
pack.usePathWalk=true config value. This is intended to help the
packfile generation within 'git push' specifically.

While this config does enable the path-walk feature, it does not lead to
the expected levels of compression in the cases it was designed to
handle. This is due to the default implication of the --reuse-delta
option as well as auto-GC.

In the performance tests used to evaluate the --path-walk option, such
as those in p5313, the --no-reuse-delta option is used to ensure that
deltas are recomputed according to the new object walk. However, it was
assumed (I assumed this) that when the objects were loose from
client-side operations that better deltas would be computed during this
operation. This wasn't confirmed because the test process used data that
was fetched from real repositories and thus existed in packed form only.

I was able to confirm that this does not reproduce when the objects to
push are loose. Careful use of making the pushed commit unreachable and
loosening the objects via 'git repack -Ad' helps to confirm my
suspicions here. Independent of this change, I'm pushing for these
pipeline agents to set 'gc.auto=0' before creating their Git objects. In
the current setup, the repo is adding objects and then incrementally
repacking them and ending up with bad cross-path deltas. This approach
can help scenarios where that makes sense, but will not cover all of our
users without them choosing to opt-in to background maintenance (and
even then, an incremental repack could cost them efficiency).

In order to make sure we are getting the intended compression in 'git
push', this change makes the --path-walk option imply --no-reuse-delta
when the --reuse-delta option is not provided.

As far as I can tell, the main motivation for implying the --reuse-delta
option by default is two-fold:

1. The code in send-pack.c that executes 'git pack-objects' is ignorant
of whether the current process is a client pushing to a remote or a
remote sending a fetch or clone to a client.

2. For servers, it is critical that they trust the previously computed
deltas whenever possible, or they could overload their CPU resources.

There's also the side that most servers use repacking logic that will
replace any bad deltas that are sent by clients (or at least, that's the
hope; we've seen that repacks can also pick bad deltas).

The --path-walk option at the moment is not compatible with reachability
bitmaps, so it is not planned to be used by Git servers. Thus, we can
reasonably assume (for now) that the --path-walk option implies a
client-side scenario, either a push or a repack. The repack case will
be explicit about whether or not to use the --reuse-delta option.

One thing to be careful about is background maintenance, which uses a
list of objects instead of refs, so we condition this on the case where
the --path-walk option will be effective by checking that the --revs
option was provided.

Alternative options considered included:

* Adding _another_ config ('pack.reuseDelta=false') to opt-in to this
choice. However, we already have pack.usePathWalk=true as an opt-in to
"do the right thing to make my data small" as far as our internal users
are concerned.

* Modify the chain between builtin/push.c, transport.c, and
builtin/send-pack.c to communicate that we are in "push" mode, not
within a fetch or clone. However, this seemed like overkill. It may be
beneficial in the future to pass through a mode like this, but it does
not meet the bar for the immediate need.

Reviewers, please see git-for-windows#5171 for the baseline
implementation of this feature within Git for Windows and thus
microsoft/git. This feature is still under review upstream.
Introduce a new maintenance task, `cache-local-objects`, that operates
on Scalar or VFS for Git repositories with a per-volume, shared object
cache (specified by `gvfs.sharedCache`) to migrate packfiles and loose
objects from the repository object directory to the shared cache.

Older versions of `microsoft/git` incorrectly placed packfiles in the
repository object directory instead of the shared cache; this task will
help clean up existing clones impacted by that issue.

Fixes #716
Add the ability to block built-in commands based on if the `core.gvfs`
setting has the `GVFS_USE_VIRTUAL_FILESYSTEM` bit set. This allows us
to selectively block commands that use the GVFS protocol, but don't use
VFS for Git (for example repos cloned via `scalar clone` against Azure
DevOps).

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
Loosen the blocking of the `repack` command from all "GVFS repos" (those
that have `core.gvfs` set) to only those that actually use the virtual
file system (VFS for Git only). This allows for `repack` to be used in
Scalar clones.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
Loosen the blocking of the `fsck` command from all "GVFS repos" (those
that have `core.gvfs` set) to only those that actually use the virtual
file system (VFS for Git only). This allows for `fsck` to be used in
Scalar clones.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
Loosen the blocking of the `prune` command from all "GVFS repos" (those
that have `core.gvfs` set) to only those that actually use the virtual
file system (VFS for Git only). This allows for `prune` to be used in
Scalar clones.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
The microsoft/git fork includes pre- and post-command hooks, with the
initial intention of using these for VFS for Git. In that environment,
these are important hooks to avoid concurrent issues when the
virtualization is incomplete.

However, in the Office monorepo the post-command hook is used in a
different way. A custom hook is used to update the sparse-checkout, if
necessary. To keep this hook from being incredibly slow on every Git
command, this hook checks for the existence of a "sentinel file" that is
written by a custom post-index-change hook and no-ops if that file does
not exist.

However, even this "no-op" is 200ms due to the use of two scripts (one
simple script in .git/hooks/ does some environment checking and then
calls a script from the working directory which actually contains the
logic).

Add a new config option, 'postCommand.strategy', that will allow for
multiple possible strategies in the future. For now, the one we are
adding is 'worktree-change' which states that we should write a
sentinel file instead of running the 'post-index-change' hook and then
skip the 'post-command' hook if the proper sentinel file doesn't exist.
If it does exist, then delete it and run the hook. This behavior is
_only_ triggered, however, if a part of the index changes that is within
the sparse checkout; if only parts of the index change that are not even
checked out on disk, the hook is still skipped.

I originally planned to put this into the repo-settings, but this caused
the repo settings to load in more cases than they did previously. When
there is an invalid boolean config option, this causes failure in new
places. This was caught by t3007.

This behavior is tested in t0401-post-command-hook.sh.

Signed-off-by: Derrick Stolee <stolee@gmail.com>
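
The sentinel handshake can be sketched as follows (the file name and directory here are hypothetical, not the fork's actual paths):

```shell
# Scratch directory standing in for wherever the hooks keep state.
hookdir=$(mktemp -d)
sentinel="$hookdir/index-changed-in-sparse-checkout"

# post-index-change side: instead of running anything slow, just record
# that a worktree-relevant part of the index changed.
touch "$sentinel"

# post-command side: no-op quickly unless the sentinel exists; if it
# does, consume it and run the expensive logic exactly once.
if [ -e "$sentinel" ]; then
    rm -f "$sentinel"
    echo "post-command: running hook"
else
    echo "post-command: skipped"
fi
```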
Replace the special casing of the `worktree` command being blocked on
VFS-enabled repos with the new `BLOCK_ON_VFS_ENABLED` flag.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Emit a warning message when the `gvfs.sharedCache` option is set that
the `repack` command will not perform repacking on the shared cache.

In the future we can teach `repack` to operate on the shared cache, at
which point we can drop this commit.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
This helps t0401 pass while under SANITIZE=leak.

Signed-off-by: Derrick Stolee <stolee@gmail.com>
Currently when the `core.gvfs` setting is set, several commands are
outright blocked from running. Some of these commands, namely `repack`
are actually OK to run in a Scalar clone, even if it uses the GVFS
protocol (for Azure DevOps).

Introduce a different blocking mechanism to only block commands when the
virtual filesystem is being used, rather than as a broad block on any
`core.gvfs` setting.
This new test demonstrates some behavior where a valid packfile is being
rejected by the Git client due to the order in which it is resolving
REF_DELTAs.

The thin packfile has a REF_DELTA chain A->B->C where C is not included
in the packfile. However, the client repository contains both C and B
already. Thus, 'git index-pack' is able to resolve A before resolving B.

When resolving B, it then attempts to resolve any other REF_DELTAs that
are pointing to B as a base. This "revisits" A and complains as if there
is a cycle, but it did not actually detect a cycle.

A fix will arrive in the next change.

Signed-off-by: Derrick Stolee <stolee@gmail.com>
The microsoft/git fork includes pre- and post-command hooks, with the
initial intention of using these for VFS for Git. In that environment,
these are important hooks to avoid concurrent issues when the
virtualization is incomplete.

However, in the Office monorepo the post-command hook is used in a
different way. A custom hook is used to update the sparse-checkout, if
necessary. To keep this hook from being incredibly slow on every Git
command, this hook checks for the existence of a "sentinel file" that is
written by a custom post-index-change hook and no-ops if that file does
not exist.

However, even this "no-op" is 200ms due to the use of two scripts (one
simple script in .git/hooks/ does some environment checking and then
calls a script from the working directory which actually contains the
logic).

Add a new config option, 'postCommand.strategy', that will allow for
multiple possible strategies in the future. For now, the one we are
adding is 'post-index-change' which states that we should write a
sentinel file instead of running the 'post-index-change' hook and then
skip the 'post-command' hook if the proper sentinel file doesn't exist.
(If it does exist, then delete it and run the hook.)

--- 

This fork contains changes specific to monorepo scenarios. If you are an
external contributor, then please detail your reason for submitting to
this fork:

* [ ] This is an early version of work already under review upstream.
* [ ] This change only applies to interactions with Azure DevOps and the
      GVFS Protocol.
* [ ] This change only applies to the virtualization hook and VFS for
Git.
* [x] This change only applies to custom bits in the microsoft/git fork.
Just like we just did in the backport from my upstream contribution,
let's convert the `curl_easy_setopt()` calls in `gvfs-helper.c` that
still passed `int` constants to pass `long` instead.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
This is an early version of work already under review upstream:
gitgitgadget#1906
On Linux, the following command would cause the terminal to be stuck
waiting:

  git fetch origin foobar

The issue would be that the fetch would fail with the error

  fatal: couldn't find remote ref foobar

but the underlying git-gvfs-helper process wouldn't die. The
`subprocess_exit_handler()` method would close its stdin and stdout, but
that wouldn't be enough to cause the process to end, even though the
`packet_read_line_gently()` call that is run in `do_sub_cmd__server()`
in a loop should return -1 and the process should the terminate
peacefully.

While it is unclear why this does not happen, there may be other
conditions where the `gvfs-helper` process would not terminate. Let's
ensure that the gvfs-helper-client process cleans up the gvfs-helper
server processes that it spawned upon exit.

Reported-by: Stuart Wilcox Humilde <stuartwilcox@microsoft.com>
Co-authored-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
This topic branch has backports of cURL compile fixes in the `osx-gcc`
job, plus a bonus `gvfs-helper` follow-up fix.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
On Linux, the following command would cause the terminal to be stuck
waiting:

```
  git fetch origin foobar
```

The issue would be that the fetch would fail with the error

```
  fatal: couldn't find remote ref foobar
```

but the underlying `git-gvfs-helper` process wouldn't die. The
`subprocess_exit_handler()` method would close its stdin and stdout, but
that wouldn't be enough to cause the process to end.

This PR addresses that by skipping the `finish_command()` call of the
`clean_on_exit_handler` and instead lets `cleanup_children()` send a
SIGTERM to terminate those spawned child processes.
@dscho dscho force-pushed the tentative/vfs-2.50.0 branch from fc2fed8 to d836430 on June 16, 2025 19:32
Member Author

dscho commented Jun 16, 2025

git range-diff --creation-factor=95 v2.49.0.windows.1..microsoft/clean/vfs-2.49.0 v2.50.0.windows.1..
  • 2: 5818e92 = 1: 977303f sparse-index.c: fix use of index hashes in expand_index

  • 4: d563c8e = 2: 24773eb t5300: confirm failure of git index-pack when non-idx suffix requested

  • 7: b1ec25c = 3: 4c1e6f7 survey: calculate more stats on refs

  • 1: 75a06eb = 4: 4008c8f t: remove advice from some tests

  • 3: 6960bb4 = 5: cbddd66 t1092: add test for untracked files and directories

  • 5: a08fc1b = 6: c4c1106 index-pack: disable rev-index if index file has non .idx suffix

  • 6: bb0ef4f = 7: 27fbb52 trace2: prefetch value of GIT_TRACE2_DST_DEBUG at startup

  • 8: 8993408 = 8: 21595d0 survey: show some commits/trees/blobs histograms

  • 9: 83af602 = 9: a08fec1 survey: add vector of largest objects for various scaling dimensions

  • 10: 70075b5 = 10: 92d6c1b survey: add pathname of blob or tree to large_item_vec

  • 11: c3e2a5b ! 11: 181c548 survey: add commit-oid to large_item detail

    @@ builtin/survey.c: static void maybe_insert_large_item(struct large_item_vec *vec
      		memset(&vec->items[k], 0, sizeof(struct large_item));
      		vec->items[k].size = size;
      		oidcpy(&vec->items[k].oid, oid);
    -+		oidcpy(&vec->items[k].containing_commit_oid, containing_commit_oid ? containing_commit_oid : null_oid());
    ++		oidcpy(&vec->items[k].containing_commit_oid, containing_commit_oid ? containing_commit_oid : null_oid(the_hash_algo));
      		strbuf_init(&vec->items[k].name, 0);
      		if (name && *name)
      			strbuf_addstr(&vec->items[k].name, name);
  • 12: 692393b = 12: 6ad0640 survey: add commit name-rev lookup to each large_item

  • 13: 8248df7 = 13: 46a240d survey: add --no-name-rev option

  • 14: dfd7966 = 14: 3972ece survey: started TODO list at bottom of source file

  • 15: 63c1e62 = 15: 3af31c4 survey: expanded TODO list at the bottom of the source file

  • 16: e376b14 = 16: d9b3cb5 survey: expanded TODO with more notes

  • 17: 5886eee = 17: 0ac9ed9 reset --stdin: trim carriage return from the paths

  • 18: cb44006 ! 18: 8e3199d Identify microsoft/git via a distinct version suffix

    @@ Commit message
      ## GIT-VERSION-GEN ##
     @@
      
    - DEF_VER=v2.49.0
    + DEF_VER=v2.50.0
      
     +# Identify microsoft/git via a distinct version suffix
     +DEF_VER=$DEF_VER.vfs.0.0
  • 19: f66336e = 19: b7c43e3 gvfs: ensure that the version is based on a GVFS tag

  • 20: 36c287d = 20: 3a29ee7 gvfs: add a GVFS-specific header file

  • 21: 3081376 ! 21: b6addbb gvfs: add the core.gvfs config setting

    @@ Makefile: LIB_OBJS += git-zlib.o
      LIB_OBJS += grep.o
     +LIB_OBJS += gvfs.o
      LIB_OBJS += hash-lookup.o
    + LIB_OBJS += hash.o
      LIB_OBJS += hashmap.o
    - LIB_OBJS += help.o
     
      ## config.c ##
     @@
    @@ meson.build: libgit_sources = [
        'grep.c',
     +  'gvfs.c',
        'hash-lookup.c',
    +   'hash.c',
        'hashmap.c',
    -   'help.c',
  • 22: 8e016a5 = 22: b22648e gvfs: add the feature to skip writing the index' SHA-1

  • 23: b00139b = 23: 7c9bd46 gvfs: add the feature that blobs may be missing

  • 24: 2298a2b = 24: 31c9170 gvfs: prevent files to be deleted outside the sparse checkout

  • 25: 53bb1c4 ! 25: e99fc40 gvfs: optionally skip reachability checks/upload pack during fetch

    @@ connected.c
      #include "gettext.h"
      #include "hex.h"
     +#include "gvfs.h"
    - #include "object-store-ll.h"
    + #include "object-store.h"
      #include "run-command.h"
      #include "sigchain.h"
     @@ connected.c: int check_connected(oid_iterate_fn fn, void *cb_data,
  • 26: 7d7f7f7 = 26: 9e3cb16 gvfs: ensure all filters and EOL conversions are blocked

  • 27: d481378 ! 27: 0682277 gvfs: allow "virtualizing" objects

    @@ environment.h: extern const char *comment_line_str;
      # endif /* USE_THE_REPOSITORY_VARIABLE */
      #endif /* ENVIRONMENT_H */
     
    - ## object-file.c ##
    + ## object-store.c ##
     @@
    - #include "fsck.h"
    + #include "environment.h"
    + #include "gettext.h"
    + #include "hex.h"
    ++#include "hook.h"
    + #include "khash.h"
    + #include "lockfile.h"
      #include "loose.h"
    - #include "object-file-convert.h"
    +@@
    + #include "strbuf.h"
    + #include "strvec.h"
    + #include "submodule.h"
     +#include "trace.h"
    -+#include "hook.h"
    + #include "write-or-die.h"
      
    - /* The maximum size for an object header. */
    - #define MAX_HEADER_LEN 32
    -@@ object-file.c: void disable_obj_read_lock(void)
    + KHASH_INIT(odb_path_map, const char * /* key: odb_path */,
    +@@ object-store.c: void disable_obj_read_lock(void)
      	pthread_mutex_destroy(&obj_read_mutex);
      }
      
    @@ object-file.c: void disable_obj_read_lock(void)
      int fetch_if_missing = 1;
      
      static int do_oid_object_info_extended(struct repository *r,
    -@@ object-file.c: static int do_oid_object_info_extended(struct repository *r,
    +@@ object-store.c: static int do_oid_object_info_extended(struct repository *r,
      	int rtype;
      	const struct object_id *real = oid;
      	int already_retried = 0;
    @@ object-file.c: static int do_oid_object_info_extended(struct repository *r,
      
      
      	if (flags & OBJECT_INFO_LOOKUP_REPLACE)
    -@@ object-file.c: static int do_oid_object_info_extended(struct repository *r,
    +@@ object-store.c: static int do_oid_object_info_extended(struct repository *r,
      	if (!oi)
      		oi = &blank_oi;
      
     +retry:
    - 	co = find_cached_object(real);
    + 	co = find_cached_object(r->objects, real);
      	if (co) {
      		if (oi->typep)
    -@@ object-file.c: static int do_oid_object_info_extended(struct repository *r,
    +@@ object-store.c: static int do_oid_object_info_extended(struct repository *r,
      			reprepare_packed_git(r);
      			if (find_pack_entry(r, real, &e))
      				break;
  • 28: c6137ba ! 28: 731f770 Hydrate missing loose objects in check_and_freshen()

    @@ contrib/long-running-read-object/example.pl (new)
     +}
     
      ## object-file.c ##
    +@@ object-file.c: static int check_and_freshen_nonlocal(const struct object_id *oid, int freshen)
    + 
    + static int check_and_freshen(const struct object_id *oid, int freshen)
    + {
    +-	return check_and_freshen_local(oid, freshen) ||
    ++	int ret;
    ++	int tried_hook = 0;
    ++
    ++retry:
    ++	ret = check_and_freshen_local(oid, freshen) ||
    + 	       check_and_freshen_nonlocal(oid, freshen);
    ++	if (!ret && core_virtualize_objects && !tried_hook) {
    ++		tried_hook = 1;
    ++		if (!read_object_process(oid))
    ++			goto retry;
    ++	}
    ++
    ++	return ret;
    + }
    + 
    + int has_loose_object_nonlocal(const struct object_id *oid)
    +
    + ## object-store.c ##
     @@
    - #include "object-file-convert.h"
    - #include "trace.h"
    - #include "hook.h"
    + #include "object-store.h"
    + #include "packfile.h"
    + #include "path.h"
    ++#include "pkt-line.h"
    + #include "promisor-remote.h"
    + #include "quote.h"
    + #include "replace-object.h"
    + #include "run-command.h"
    + #include "setup.h"
     +#include "sigchain.h"
    + #include "strbuf.h"
    + #include "strvec.h"
     +#include "sub-process.h"
    -+#include "pkt-line.h"
    - 
    - /* The maximum size for an object header. */
    - #define MAX_HEADER_LEN 32
    -@@ object-file.c: int has_alt_odb(struct repository *r)
    + #include "submodule.h"
    + #include "trace.h"
    + #include "write-or-die.h"
    +@@ object-store.c: int has_alt_odb(struct repository *r)
      	return !!r->objects->odb->next;
      }
      
    @@ object-file.c: int has_alt_odb(struct repository *r)
     +				    &entry->supported_capabilities);
     +}
     +
    -+static int read_object_process(const struct object_id *oid)
    ++int read_object_process(const struct object_id *oid)
     +{
     +	int err;
     +	struct read_object_process *entry;
    @@ object-file.c: int has_alt_odb(struct repository *r)
     +	return err;
     +}
     +
    - /* Returns 1 if we have successfully freshened the file, 0 otherwise. */
    - static int freshen_file(const char *fn)
    - {
    -@@ object-file.c: static int check_and_freshen_nonlocal(const struct object_id *oid, int freshen)
    + int obj_read_use_lock = 0;
    + pthread_mutex_t obj_read_mutex;
      
    - static int check_and_freshen(const struct object_id *oid, int freshen)
    - {
    --	return check_and_freshen_local(oid, freshen) ||
    -+	int ret;
    -+	int tried_hook = 0;
    -+
    -+retry:
    -+	ret = check_and_freshen_local(oid, freshen) ||
    - 	       check_and_freshen_nonlocal(oid, freshen);
    -+	if (!ret && core_virtualize_objects && !tried_hook) {
    -+		tried_hook = 1;
    -+		if (!read_object_process(oid))
    -+			goto retry;
    -+	}
    -+
    -+	return ret;
    - }
    - 
    - int has_loose_object_nonlocal(const struct object_id *oid)
    -@@ object-file.c: void disable_obj_read_lock(void)
    +@@ object-store.c: void disable_obj_read_lock(void)
      	pthread_mutex_destroy(&obj_read_mutex);
      }
      
    @@ object-file.c: void disable_obj_read_lock(void)
      int fetch_if_missing = 1;
      
      static int do_oid_object_info_extended(struct repository *r,
    -@@ object-file.c: static int do_oid_object_info_extended(struct repository *r,
    +@@ object-store.c: static int do_oid_object_info_extended(struct repository *r,
      				break;
      			if (core_virtualize_objects && !tried_hook) {
      				tried_hook = 1;
    @@ object-file.c: static int do_oid_object_info_extended(struct repository *r,
      			}
      		}
     
    + ## object-store.h ##
    +@@ object-store.h: void *read_object_with_reference(struct repository *r,
    + 				 unsigned long *size,
    + 				 struct object_id *oid_ret);
    + 
    ++int read_object_process(const struct object_id *oid);
    ++
    + #endif /* OBJECT_STORE_H */
    +
      ## t/meson.build ##
     @@ t/meson.build: integration_tests = [
        't0410-partial-clone.sh',
  • 29: 9090263 ! 29: 20a6871 sha1_file: when writing objects, skip the read_object_hook

    @@ object-file.c: int has_loose_object_nonlocal(const struct object_id *oid)
     +	return check_and_freshen(oid, 0, 0);
      }
      
    - static void mmap_limit_check(size_t length)
    + int format_object_header(char *str, size_t size, enum object_type type,
     @@ object-file.c: static int write_loose_object(const struct object_id *oid, char *hdr,
      					  FOF_SKIP_COLLISION_CHECK);
      }
    @@ object-file.c: int write_object_file_flags(const void *buf, size_t len,
      		return 0;
      	if (write_loose_object(oid, hdr, hdrlen, buf, len, 0, flags))
      		return -1;
    -@@ object-file.c: int write_object_file_literally(const void *buf, size_t len,
    - 
    - 	if (!(flags & HASH_WRITE_OBJECT))
    - 		goto cleanup;
    --	if (freshen_packed_object(oid) || freshen_loose_object(oid))
    -+	if (freshen_packed_object(oid) || freshen_loose_object(oid, 1))
    - 		goto cleanup;
    - 	status = write_loose_object(oid, header, hdrlen, buf, len, 0, 0);
    - 	if (compat_type != -1)
     
      ## t/t0410/read-object ##
     @@ t/t0410/read-object: while (1) {
  • 30: a4b3a90 = 30: 2f03502 gvfs: add global command pre and post hook procs

  • 31: da5bd05 = 31: bb22ca0 t0400: verify that the hook is called correctly from a subdirectory

  • 32: fd95c5c = 32: f4e18ac t0400: verify core.hooksPath is respected by pre-command

  • 33: ce58c90 = 33: 91cdb9c Pass PID of git process to hooks.

  • 34: 26c0838 ! 34: 4ded04a sparse-checkout: update files with a modify/delete conflict

    @@ Metadata
     Author: Kevin Willford <kewillf@microsoft.com>
     
      ## Commit message ##
    -    sparse-checkout: update files with a modify/delete conflict
    +    sparse-checkout: make sure to update files with a modify/delete conflict
     
         When using the sparse-checkout feature, the file might not be on disk
    -    because the skip-worktree bit is on.
    +    because the skip-worktree bit is on. This used to be a bug in the
    +    (hence deleted) `recursive` strategy. Let's ensure that this bug does
    +    not resurface.
     
         Signed-off-by: Kevin Willford <kewillf@microsoft.com>
    -
    - ## merge-recursive.c ##
    -@@ merge-recursive.c: static int handle_change_delete(struct merge_options *opt,
    - 		 * path.  We could call update_file_flags() with update_cache=0
    - 		 * and update_wd=0, but that's a no-op.
    - 		 */
    --		if (change_branch != opt->branch1 || alt_path)
    -+		if (change_branch != opt->branch1 || alt_path || !file_exists(update_path))
    - 			ret = update_file(opt, 0, changed, update_path);
    - 	}
    - 	free(alt_path);
    +    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
     
      ## t/meson.build ##
     @@ t/meson.build: integration_tests = [
  • 35: 5b3d2d8 ! 35: 8a127d0 sparse-checkout: avoid writing entries with the skip-worktree bit

    @@ Commit message
         Signed-off-by: Kevin Willford <kewillf@microsoft.com>
     
      ## apply.c ##
    +@@
    + #include "dir.h"
    + #include "environment.h"
    + #include "gettext.h"
    ++#include "gvfs.h"
    + #include "hex.h"
    + #include "xdiff-interface.h"
    + #include "merge-ll.h"
     @@ apply.c: static int checkout_target(struct index_state *istate,
      {
      	struct checkout costate = CHECKOUT_INIT;
    @@ apply.c: static int checkout_target(struct index_state *istate,
     +	 * the working directory version up to date with what it
     +	 * changed the index version to be.
     +	 */
    -+	if (ce_skip_worktree(ce))
    ++	if (gvfs_config_is_set(GVFS_USE_VIRTUAL_FILESYSTEM) &&
    ++	    ce_skip_worktree(ce))
     +		return 0;
     +
      	costate.refresh_cache = 1;
  • 36: 0e66d50 = 36: ace1f7e Do not remove files outside the sparse-checkout

  • 37: 625e606 ! 37: a765693 send-pack: do not check for sha1 file when GVFS_MISSING_OK set

    @@ Commit message
     
      ## send-pack.c ##
     @@
    + #include "commit.h"
      #include "date.h"
      #include "gettext.h"
    - #include "hex.h"
     +#include "gvfs.h"
    - #include "object-store-ll.h"
    + #include "hex.h"
    + #include "object-store.h"
      #include "pkt-line.h"
    - #include "sideband.h"
     @@ send-pack.c: int option_parse_push_signed(const struct option *opt,
      static void feed_object(struct repository *r,
      			const struct object_id *oid, FILE *fh, int negative)
      {
    --	if (negative &&
    -+	if (negative && !gvfs_config_is_set(GVFS_MISSING_OK) &&
    - 	    !repo_has_object_file_with_flags(r, oid,
    - 					     OBJECT_INFO_SKIP_FETCH_OBJECT |
    - 					     OBJECT_INFO_QUICK))
    +-	if (negative && !has_object(r, oid, 0))
    ++	if (negative && !gvfs_config_is_set(GVFS_MISSING_OK) && !has_object(r, oid, 0))
    + 		return;
    + 
    + 	if (negative)
  • 38: c8697f1 = 38: 642efee cache-tree: remove use of strbuf_addf in update_one

  • 39: b1f8410 ! 39: cc40088 gvfs: block unsupported commands when running in a GVFS repo

    @@ Commit message
     
      ## builtin/gc.c ##
     @@
    - #include "date.h"
    + #include "dir.h"
      #include "environment.h"
      #include "hex.h"
     +#include "gvfs.h"
      #include "config.h"
      #include "tempfile.h"
      #include "lockfile.h"
    -@@ builtin/gc.c: struct repository *repo UNUSED)
    +@@ builtin/gc.c: int cmd_gc(int argc,
      	if (quiet)
      		strvec_push(&repack, "-q");
      
  • 40: 891cef3 = 40: c7928a6 worktree: allow in Scalar repositories

  • 41: 589c2f5 = 41: f2bc261 gvfs: allow overriding core.gvfs

  • 42: 193d112 = 42: 02d8514 BRANCHES.md: Add explanation of branches and using forks

  • 43: 8d90680 = 43: 89dddeb Add virtual file system settings and hook proc

  • 44: ea134ec = 44: 796b668 virtualfilesystem: don't run the virtual file system hook if the index has been redirected

  • 45: efd39ab = 45: faad088 virtualfilesystem: check if directory is included

  • 46: 19f7756 = 46: db8995e backwards-compatibility: support the post-indexchanged hook

  • 47: b4010fd = 47: 14c43c9 gvfs: verify that the built-in FSMonitor is disabled

  • 48: e76967d = 48: 299e951 wt-status: add trace2 data for sparse-checkout percentage

  • 49: 4783edb = 49: 86df488 wt-status: add VFS hydration percentage to normal git status output

  • 50: 24c297d ! 50: 243afe7 status: add status serialization mechanism

    @@ builtin/commit.c: struct repository *repo UNUSED)
      		OPT_CALLBACK_F(0, "porcelain", &status_format,
      		  N_("version"), N_("machine-readable output"),
      		  PARSE_OPT_OPTARG, opt_parse_porcelain),
    -+		{ OPTION_CALLBACK, 0, "serialize", &status_format,
    ++		OPT_CALLBACK_F(0, "serialize", &status_format,
     +		  N_("version"), N_("serialize raw status data to stdout"),
    -+		  PARSE_OPT_OPTARG | PARSE_OPT_NONEG, opt_parse_serialize },
    -+		{ OPTION_CALLBACK, 0, "deserialize", NULL,
    ++		  PARSE_OPT_OPTARG | PARSE_OPT_NONEG, opt_parse_serialize),
    ++		OPT_CALLBACK_F(0, "deserialize", NULL,
     +		  N_("path"), N_("deserialize raw status data from file"),
    -+		  PARSE_OPT_OPTARG, opt_parse_deserialize },
    ++		  PARSE_OPT_OPTARG, opt_parse_deserialize),
      		OPT_SET_INT(0, "long", &status_format,
      			    N_("show status in long format (default)"),
      			    STATUS_FORMAT_LONG),
  • 51: 489e791 = 51: d17cbee Teach ahead-behind and serialized status to play nicely together

  • 52: be1a3ef ! 52: 95c7762 status: serialize to path

    @@ builtin/commit.c: static int opt_parse_porcelain(const struct option *opt, const
     @@ builtin/commit.c: struct repository *repo UNUSED)
      		  N_("version"), N_("machine-readable output"),
      		  PARSE_OPT_OPTARG, opt_parse_porcelain),
    - 		{ OPTION_CALLBACK, 0, "serialize", &status_format,
    + 		OPT_CALLBACK_F(0, "serialize", &status_format,
     -		  N_("version"), N_("serialize raw status data to stdout"),
     +		  N_("path"), N_("serialize raw status data to path or stdout"),
    - 		  PARSE_OPT_OPTARG | PARSE_OPT_NONEG, opt_parse_serialize },
    - 		{ OPTION_CALLBACK, 0, "deserialize", NULL,
    + 		  PARSE_OPT_OPTARG | PARSE_OPT_NONEG, opt_parse_serialize),
    + 		OPT_CALLBACK_F(0, "deserialize", NULL,
      		  N_("path"), N_("deserialize raw status data from file"),
     @@ builtin/commit.c: struct repository *repo UNUSED)
      	if (s.relative_paths)
  • 53: 956982d = 53: ee09884 status: reject deserialize in V2 and conflicts

  • 54: 9218600 = 54: 5c68d91 serialize-status: serialize global and repo-local exclude file metadata

  • 55: 41cac10 ! 55: 3b219d7 status: deserialization wait

    @@ builtin/commit.c: static int git_status_config(const char *k, const char *v,
      		enum untracked_status_type u;
      
     @@ builtin/commit.c: struct repository *repo UNUSED)
    - 		{ OPTION_CALLBACK, 0, "deserialize", NULL,
    + 		OPT_CALLBACK_F(0, "deserialize", NULL,
      		  N_("path"), N_("deserialize raw status data from file"),
    - 		  PARSE_OPT_OPTARG, opt_parse_deserialize },
    -+		{ OPTION_CALLBACK, 0, "deserialize-wait", NULL,
    + 		  PARSE_OPT_OPTARG, opt_parse_deserialize),
    ++		OPT_CALLBACK_F(0, "deserialize-wait", NULL,
     +		  N_("fail|block|no"), N_("how to wait if status cache file is invalid"),
    -+		  PARSE_OPT_OPTARG, opt_parse_deserialize_wait },
    ++		  PARSE_OPT_OPTARG, opt_parse_deserialize_wait),
      		OPT_SET_INT(0, "long", &status_format,
      			    N_("show status in long format (default)"),
      			    STATUS_FORMAT_LONG),
  • 56: 3218b44 (upstream deleted merge-recursive.c in ad45b32) < -: ------------ merge-recursive: avoid confusing logic in was_dirty()

  • 57: d33b888 (upstream deleted merge-recursive.c in ad45b32) < -: ------------ merge-recursive: add some defensive coding to was_dirty()

  • 58: 1756f59 (upstream deleted merge-recursive.c in ad45b32) < -: ------------ merge-recursive: teach was_dirty() about the virtualfilesystem

  • 59: e1d8c75 = 56: 03d7bae status: deserialize with -uno does not print correct hint

  • 60: 4ed13dd = 57: ed3afe5 fsmonitor: check CE_FSMONITOR_VALID in ce_uptodate

  • 61: 89680d2 = 58: b30360f fsmonitor: add script for debugging and update script for tests

  • 62: ae21aba = 59: 1203c1f status: disable deserialize when verbose output requested.

  • 63: 823a8b6 = 60: d2e2c19 t7524: add test for verbose status deserialzation

  • 64: 51e4b61 = 61: a06fc41 deserialize-status: silently fallback if we cannot read cache file

  • 65: d3bd23f ! 62: cad0d38 gvfs:trace2:data: add trace2 tracing around read_object_process

    @@ Commit message
     
         Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
     
    - ## object-file.c ##
    + ## object-store.c ##
     @@
    - #include "loose.h"
    - #include "object-file-convert.h"
    + #include "sub-process.h"
    + #include "submodule.h"
      #include "trace.h"
     +#include "trace2.h"
    - #include "hook.h"
    - #include "sigchain.h"
    - #include "sub-process.h"
    -@@ object-file.c: static int read_object_process(const struct object_id *oid)
    + #include "write-or-die.h"
    + 
    + KHASH_INIT(odb_path_map, const char * /* key: odb_path */,
    +@@ object-store.c: int read_object_process(const struct object_id *oid)
      
      	start = getnanotime();
      
    @@ object-file.c: static int read_object_process(const struct object_id *oid)
      	if (!subprocess_map_initialized) {
      		subprocess_map_initialized = 1;
      		hashmap_init(&subprocess_map, (hashmap_cmp_fn)cmd2process_cmp,
    -@@ object-file.c: static int read_object_process(const struct object_id *oid)
    +@@ object-store.c: int read_object_process(const struct object_id *oid)
      		if (subprocess_start(&subprocess_map, &entry->subprocess, cmd,
      				     start_read_object_fn)) {
      			free(entry);
    @@ object-file.c: static int read_object_process(const struct object_id *oid)
      
      	sigchain_push(SIGPIPE, SIG_IGN);
      
    -@@ object-file.c: static int read_object_process(const struct object_id *oid)
    +@@ object-store.c: int read_object_process(const struct object_id *oid)
      
      	trace_performance_since(start, "read_object_process");
      
  • 66: 2077fff = 63: 4fb72fa gvfs:trace2:data: status deserialization information

  • 67: 5a522b5 = 64: 158f193 gvfs:trace2:data: status serialization

  • 68: 8315cfe = 65: 8a6d3a6 gvfs:trace2:data: add vfs stats

  • 69: 08edf4a = 66: 3caf354 trace2: refactor setting process starting time

  • 70: cd233f1 = 67: d70fd63 trace2:gvfs:experiment: clear_ce_flags_1

  • 71: 27c3221 = 68: 0e2c869 trace2:gvfs:experiment: report_tracking

  • 72: 6d0ca07 = 69: 2eda7e6 trace2:gvfs:experiment: read_cache: annotate thread usage in read-cache

  • 73: 473b639 = 70: 5c2cd7c trace2:gvfs:experiment: read-cache: time read/write of cache-tree extension

  • 74: 5421a8c = 71: 0ac365f trace2:gvfs:experiment: add region to apply_virtualfilesystem()

  • 75: faaf7a9 = 72: 5ff9d32 trace2:gvfs:experiment: add region around unpack_trees()

  • 76: cde8c54 ! 73: 77eb686 trace2:gvfs:experiment: add region to cache_tree_fully_valid()

    @@ cache-tree.c: static void discard_unused_subtrees(struct cache_tree *it)
      	int i;
      	if (!it)
     @@ cache-tree.c: int cache_tree_fully_valid(struct cache_tree *it)
    - 	if (it->entry_count < 0 || !repo_has_object_file(the_repository, &it->oid))
    + 		       HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR))
      		return 0;
      	for (i = 0; i < it->subtree_nr; i++) {
     -		if (!cache_tree_fully_valid(it->down[i]->cache_tree))
  • 77: 45e8f91 ! 74: 84ebd4d trace2:gvfs:experiment: add unpack_entry() counter to unpack_trees() and report_tracking()

    @@ Commit message
     
      ## builtin/checkout.c ##
     @@
    - #include "merge-recursive.h"
    + #include "object-file.h"
      #include "object-name.h"
    - #include "object-store-ll.h"
    + #include "object-store.h"
     +#include "packfile.h"
      #include "parse-options.h"
      #include "path.h"
  • 78: 9a56222 = 75: 43d8148 trace2:gvfs:experiment: increase default event depth for unpack-tree data

  • 79: bf30c8e = 76: 862c7cd trace2:gvfs:experiment: add data for check_updates() in unpack_trees()

  • 80: 4468fcd = 77: 0effeb2 Trace2:gvfs:experiment: capture more 'tracking' details

  • 81: b01b2a1 = 78: 8b649ed credential: set trace2_child_class for credential manager children

  • 82: 4a9589d = 79: d64b32b sub-process: do not borrow cmd pointer from caller

  • 83: 926e6cf = 80: 2cd400a sub-process: add subprocess_start_argv()

  • 84: 4826944 ! 81: 72a4a1e sha1-file: add function to update existing loose object cache

    @@ object-file.c: struct oidtree *odb_loose_cache(struct object_directory *odb,
      {
      	oidtree_clear(odb->loose_objects_cache);
     
    - ## object-store-ll.h ##
    -@@ object-store-ll.h: void restore_primary_odb(struct object_directory *restore_odb, const char *old_p
    + ## object-file.h ##
    +@@ object-file.h: struct object_directory;
      struct oidtree *odb_loose_cache(struct object_directory *odb,
    - 				  const struct object_id *oid);
    + 				const struct object_id *oid);
      
     +/*
     + * Add a new object to the loose object cache (possibly after the
  • 85: 13afa3c = 82: 24913f1 packfile: add install_packed_git_and_mru()

  • 86: bdd8bcb ! 83: 7862c20 index-pack: avoid immediate object fetch while parsing packfile

    @@ Commit message
     
      ## builtin/index-pack.c ##
     @@ builtin/index-pack.c: static void sha1_object(const void *data, struct object_entry *obj_entry,
    + 	if (startup_info->have_repository) {
      		read_lock();
    - 		collision_test_needed =
    - 			repo_has_object_file_with_flags(the_repository, oid,
    --							OBJECT_INFO_QUICK);
    -+							OBJECT_INFO_FOR_PREFETCH);
    + 		collision_test_needed = has_object(the_repository, oid,
    +-						   HAS_OBJECT_FETCH_PROMISOR);
    ++						   OBJECT_INFO_FOR_PREFETCH);
      		read_unlock();
      	}
      
  • 95: 42cce0a ! 84: dfa84b1 gvfs-helper: create tool to fetch objects using the GVFS Protocol

    @@ Makefile: LIB_OBJS += gpg-interface.o
      LIB_OBJS += gvfs.o
     +LIB_OBJS += gvfs-helper-client.o
      LIB_OBJS += hash-lookup.o
    + LIB_OBJS += hash.o
      LIB_OBJS += hashmap.o
    - LIB_OBJS += help.o
     @@ Makefile: endif
              endif
      	BASIC_CFLAGS += $(CURL_CFLAGS)
    @@ gvfs-helper-client.c (new)
     @@
     +#define USE_THE_REPOSITORY_VARIABLE
     +#include "git-compat-util.h"
    ++#include "gvfs-helper-client.h"
     +#include "hex.h"
    -+#include "strvec.h"
    -+#include "trace2.h"
    -+#include "oidset.h"
    ++#include "object-file.h"
     +#include "object.h"
    -+#include "object-store.h"
    -+#include "gvfs-helper-client.h"
    -+#include "sub-process.h"
    -+#include "sigchain.h"
    ++#include "oidset.h"
    ++#include "packfile.h"
     +#include "pkt-line.h"
     +#include "quote.h"
    -+#include "packfile.h"
    ++#include "sigchain.h"
    ++#include "strvec.h"
    ++#include "sub-process.h"
    ++#include "trace2.h"
     +
     +static struct oidset gh_client__oidset_queued = OIDSET_INIT;
     +static unsigned long gh_client__oidset_count;
    @@ gvfs-helper-client.h (new)
     +
     +struct repository;
     +struct commit;
    ++struct object_id;
     +
     +enum gh_client__created {
     +	/*
    @@ gvfs-helper.c (new)
     +	int show_progress;
     +
     +	int depth;
    -+	int block_size;
    ++	unsigned int block_size;
     +
     +	enum gh__cache_server_mode cache_server_mode;
     +} gh__cmd_opts;
    @@ gvfs-helper.c (new)
     +static enum gh__error_code do_sub_cmd__get(int argc, const char **argv)
     +{
     +	static struct option get_options[] = {
    -+		OPT_MAGNITUDE('b', "block-size", &gh__cmd_opts.block_size,
    -+			      N_("number of objects to request at a time")),
    ++		OPT_UNSIGNED('b', "block-size", &gh__cmd_opts.block_size,
    ++			     N_("number of objects to request at a time")),
     +		OPT_INTEGER('d', "depth", &gh__cmd_opts.depth,
     +			    N_("Commit depth")),
     +		OPT_END(),
    @@ gvfs-helper.c (new)
     +static enum gh__error_code do_sub_cmd__server(int argc, const char **argv)
     +{
     +	static struct option server_options[] = {
    -+		OPT_MAGNITUDE('b', "block-size", &gh__cmd_opts.block_size,
    -+			      N_("number of objects to request at a time")),
    ++		OPT_UNSIGNED('b', "block-size", &gh__cmd_opts.block_size,
    ++			     N_("number of objects to request at a time")),
     +		OPT_INTEGER('d', "depth", &gh__cmd_opts.depth,
     +			    N_("Commit depth")),
     +		OPT_END(),
    @@ meson.build: libgit_sources = [
        'gvfs.c',
     +  'gvfs-helper-client.c',
        'hash-lookup.c',
    +   'hash.c',
        'hashmap.c',
    -   'help.c',
    -@@ meson.build: if get_option('curl').enabled()
    +@@ meson.build: if curl.found()
          )
        endif
      
    @@ meson.build: endforeach
        'git-shell',
        'git-upload-archive',
     
    - ## object-file.c ##
    + ## object-store.c ##
     @@
    - #include "sigchain.h"
    - #include "sub-process.h"
    - #include "pkt-line.h"
    + #include "dir.h"
    + #include "environment.h"
    + #include "gettext.h"
     +#include "gvfs-helper-client.h"
    - 
    - /* The maximum size for an object header. */
    - #define MAX_HEADER_LEN 32
    -@@ object-file.c: static int do_oid_object_info_extended(struct repository *r,
    + #include "hex.h"
    + #include "hook.h"
    + #include "khash.h"
    +@@ object-store.c: static int do_oid_object_info_extended(struct repository *r,
      	const struct object_id *real = oid;
      	int already_retried = 0;
      	int tried_hook = 0;
    @@ object-file.c: static int do_oid_object_info_extended(struct repository *r,
      
      	if (flags & OBJECT_INFO_LOOKUP_REPLACE)
      		real = lookup_replace_object(r, oid);
    -@@ object-file.c: static int do_oid_object_info_extended(struct repository *r,
    +@@ object-store.c: static int do_oid_object_info_extended(struct repository *r,
      		if (!loose_object_info(r, real, oi, flags))
      			return 0;
      
    @@ promisor-remote.c
      #include "git-compat-util.h"
     +#include "environment.h"
      #include "gettext.h"
    - #include "hex.h"
    - #include "object-store-ll.h"
     +#include "gvfs-helper-client.h"
    + #include "hex.h"
    + #include "object-store.h"
      #include "promisor-remote.h"
    - #include "config.h"
    - #include "trace2.h"
     @@ promisor-remote.c: struct promisor_remote *repo_promisor_remote_find(struct repository *r,
      
      int repo_has_promisor_remote(struct repository *r)
  • 96: 22fa4cf ! 85: 39d2919 sha1-file: create shared-cache directory if it doesn't exist

    @@ gvfs-helper-client.c
     @@
      #define USE_THE_REPOSITORY_VARIABLE
      #include "git-compat-util.h"
    ++#include "dir.h"
     +#include "environment.h"
    + #include "gvfs-helper-client.h"
      #include "hex.h"
    - #include "strvec.h"
    - #include "trace2.h"
    + #include "object-file.h"
     @@ gvfs-helper-client.c: static int gh_client__get__receive_response(
      	return err;
      }
    @@ gvfs-helper.c: static void approve_cache_server_creds(void)
      
      /*
     
    - ## object-file.c ##
    -@@ object-file.c: const char *loose_object_path(struct repository *r, struct strbuf *buf,
    - 	return odb_loose_path(r->objects->odb, buf, oid);
    + ## object-store.c ##
    +@@ object-store.c: int odb_mkstemp(struct strbuf *temp_filename, const char *pattern)
    + 	return xmkstemp_mode(temp_filename->buf, mode);
      }
      
     +static int gvfs_matched_shared_cache_to_alternate;
    @@ object-file.c: const char *loose_object_path(struct repository *r, struct strbuf
      /*
       * Return non-zero iff the path is usable as an alternate object database.
       */
    -@@ object-file.c: static int alt_odb_usable(struct raw_object_store *o,
    +@@ object-store.c: static int alt_odb_usable(struct raw_object_store *o,
      {
      	int r;
      
    @@ object-file.c: static int alt_odb_usable(struct raw_object_store *o,
     +		 */
     +		strbuf_addf(&buf_pack_foo, "%s/pack/foo", path->buf);
     +
    -+		scld = safe_create_leading_directories(buf_pack_foo.buf);
    ++		scld = safe_create_leading_directories(the_repository, buf_pack_foo.buf);
     +		if (scld != SCLD_OK && scld != SCLD_EXISTS) {
     +			error_errno(_("could not create shared-cache ODB '%s'"),
     +				    gvfs_shared_cache_pathname.buf);
    @@ object-file.c: static int alt_odb_usable(struct raw_object_store *o,
      	/* Detect cases where alternate disappeared */
      	if (!is_directory(path->buf)) {
      		error(_("object directory %s does not exist; "
    -@@ object-file.c: void prepare_alt_odb(struct repository *r)
    +@@ object-store.c: void prepare_alt_odb(struct repository *r)
      	link_alt_odb_entries(r, r->objects->alternate_db, PATH_SEP, NULL, 0);
      
      	read_info_alternates(r, r->objects->odb->path, 0);
  • 97: 167c330 = 86: 234de38 gvfs-helper: better handling of network errors

  • 98: c1fdd3b ! 87: 2df1986 gvfs-helper-client: properly update loose cache with fetched OID

    @@ Commit message
         Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
     
      ## gvfs-helper-client.c ##
    -@@
    - #include "pkt-line.h"
    - #include "quote.h"
    - #include "packfile.h"
    -+#include "hex.h"
    - 
    - static struct oidset gh_client__oidset_queued = OIDSET_INIT;
    - static unsigned long gh_client__oidset_count;
     @@ gvfs-helper-client.c: static void gh_client__update_loose_cache(const char *line)
      	if (!skip_prefix(line, "loose ", &v1_oid))
      		BUG("update_loose_cache: invalid line '%s'", line);
  • 99: 947ff9f ! 88: fbae44b gvfs-helper: V2 robust retry and throttling

    @@ gvfs-helper.c
      //            Interactive verb: get
      //
      //                 Fetch 1 or more objects.  If a cache-server is configured,
    +@@
    + #include "credential.h"
    + #include "oid-array.h"
    + #include "send-pack.h"
    ++#include "path.h"
    + #include "protocol.h"
    + #include "quote.h"
    + #include "transport.h"
     @@ gvfs-helper.c: static const char *const server_usage[] = {
      	NULL
      };
    @@ gvfs-helper.c: enum gh__error_code {
     @@ gvfs-helper.c: static struct gh__cmd_opts {
      
      	int depth;
    - 	int block_size;
    + 	unsigned int block_size;
     +	int max_retries;
     +	int max_transient_backoff_sec;
      
    @@ gvfs-helper.c: static void select_odb(void)
     +	strbuf_addbuf(&buf, &basename);
     +	strbuf_addstr(&buf, ".temp");
     +
    -+	scld = safe_create_leading_directories(buf.buf);
    ++	scld = safe_create_leading_directories(the_repository, buf.buf);
     +	if (scld != SCLD_OK && scld != SCLD_EXISTS) {
     +		strbuf_addf(&status->error_message,
     +			    "could not create directory for packfile: '%s'",
    @@ gvfs-helper.c: static void do_fetch_oidset(struct gh__response_status *status,
      			/*
      			 * Because the oidset iterator has random
     @@ gvfs-helper.c: static enum gh__error_code do_sub_cmd__get(int argc, const char **argv)
    - 			      N_("number of objects to request at a time")),
    + 			     N_("number of objects to request at a time")),
      		OPT_INTEGER('d', "depth", &gh__cmd_opts.depth,
      			    N_("Commit depth")),
     +		OPT_INTEGER('r', "max-retries", &gh__cmd_opts.max_retries,
    @@ gvfs-helper.c: static enum gh__error_code do_server_subprocess_get(void)
      
      	/*
     @@ gvfs-helper.c: static enum gh__error_code do_sub_cmd__server(int argc, const char **argv)
    - 			      N_("number of objects to request at a time")),
    + 			     N_("number of objects to request at a time")),
      		OPT_INTEGER('d', "depth", &gh__cmd_opts.depth,
      			    N_("Commit depth")),
     +		OPT_INTEGER('r', "max-retries", &gh__cmd_opts.max_retries,
  • 100: 25482f0 ! 89: dccef60 gvfs-helper: expose gvfs/objects GET and POST semantics

    @@ gvfs-helper.c: static void create_tempfile_for_packfile(
     +	strbuf_addch(buf_path, '/');
     +	strbuf_addstr(buf_path, hex+2);
     +
    -+	scld = safe_create_leading_directories(buf_path->buf);
    ++	scld = safe_create_leading_directories(the_repository, buf_path->buf);
     +	if (scld != SCLD_OK && scld != SCLD_EXISTS)
     +		return -1;
     +
    @@ gvfs-helper.c: static enum gh__error_code do_sub_cmd__config(int argc UNUSED, co
     +static enum gh__error_code do_sub_cmd__post(int argc, const char **argv)
     +{
     +	static struct option post_options[] = {
    - 		OPT_MAGNITUDE('b', "block-size", &gh__cmd_opts.block_size,
    - 			      N_("number of objects to request at a time")),
    + 		OPT_UNSIGNED('b', "block-size", &gh__cmd_opts.block_size,
    + 			     N_("number of objects to request at a time")),
      		OPT_INTEGER('d', "depth", &gh__cmd_opts.depth,
     @@ gvfs-helper.c: static enum gh__error_code do_sub_cmd__get(int argc, const char **argv)
      	unsigned long nr_oid_total;
  • 101: dcec60f = 90: 8a6ea92 gvfs-helper: dramatically reduce progress noise

  • 103: 192dc5f = 91: 8e98c11 gvfs-helper: handle pack-file after single POST request

  • 104: 0c7e59f = 92: bc9ec98 test-gvfs-prococol, t5799: tests for gvfs-helper

  • 105: 9f748c9 = 93: 3a7e0f3 gvfs-helper: move result-list construction into install functions

  • 106: c4b74af = 94: b4a2b77 t5799: add support for POST to return either a loose object or packfile

  • 107: 4a23a7b = 95: 8d0eb78 t5799: cleanup wc-l and grep-c lines

  • 108: 0531b1d ! 96: 123a087 gvfs-helper: verify loose objects after write

    @@ gvfs-helper.c: static void install_packfile(struct gh__request_params *params,
     +	enum object_type type;
     +	void *contents = NULL;
     +	unsigned long size;
    -+	struct strbuf type_name = STRBUF_INIT;
     +	int ret;
     +	struct object_info oi = OBJECT_INFO_INIT;
    -+	struct object_id real_oid = *null_oid();
    ++	struct object_id real_oid = *null_oid(the_hash_algo);
     +	oi.typep = &type;
     +	oi.sizep = &size;
    -+	oi.type_name = &type_name;
     +
     +	ret = read_loose_object(path, expected_oid, &real_oid, &contents, &oi);
     +	free(contents);
    -+	strbuf_release(&type_name);
     +
     +	return ret;
     +}
  • 109: 4ded459 = 97: eb34b6c t7599: create corrupt blob test

  • 110: 4f2befe ! 98: 7953506 gvfs-helper: add prefetch support

    @@ gvfs-helper.c: static void create_tempfile_for_packfile(
     +	len_tp = buf.len;
     +	strbuf_addf(  &buf, "%s.%s", basename.buf, suffix1);
      
    - 	scld = safe_create_leading_directories(buf.buf);
    + 	scld = safe_create_leading_directories(the_repository, buf.buf);
      	if (scld != SCLD_OK && scld != SCLD_EXISTS) {
      		strbuf_addf(&status->error_message,
     -			    "could not create directory for packfile: '%s'",
  • 111: f80260c = 99: 8072d6a gvfs-helper: add prefetch .keep file for last packfile

  • 112: 8aa042a = 100: 8c89a27 gvfs-helper: do one read in my_copy_fd_len_tail()

  • 113: 45bc097 = 101: 7f95692 gvfs-helper: move content-type warning for prefetch packs

  • 114: 94679e4 = 102: 4a29c10 fetch: use gvfs-helper prefetch under config

  • 115: 6135cd1 = 103: 046472a gvfs-helper: better support for concurrent packfile fetches

  • 116: 7aef40e = 104: 2d7f2f6 remote-curl: do not call fetch-pack when using gvfs-helper

  • 117: 0710580 = 105: 76d5ebf fetch: reprepare packs before checking connectivity

  • 118: 9f9f0f5 = 106: e36ab0c gvfs-helper: retry when creating temp files

  • 119: a5d28bf = 107: 6a69e21 sparse: avoid warnings about known cURL issues in gvfs-helper.c

  • 120: a11cc96 = 108: 83f5cc1 gvfs-helper: add --max-retries to prefetch verb

  • 121: 4e2b228 = 109: 0dad7c5 t5799: add tests to detect corrupt pack/idx files in prefetch

  • 127: 891edf9 ! 110: 5d45e7d maintenance: care about gvfs.sharedCache config

    @@ builtin/gc.c: static int write_loose_object_to_stdin(const struct object_id *oid
      					   NULL, NULL, NULL))
      		return 0;
     @@ builtin/gc.c: static int pack_loose(struct maintenance_run_opts *opts)
    - 	strvec_push(&pack_proc.args, "pack-objects");
    - 	if (opts->quiet)
      		strvec_push(&pack_proc.args, "--quiet");
    + 	else
    + 		strvec_push(&pack_proc.args, "--no-quiet");
     -	strvec_pushf(&pack_proc.args, "%s/pack/loose", r->objects->odb->path);
     +	strvec_pushf(&pack_proc.args, "%s/pack/loose", object_dir);
      
      	pack_proc.in = -1;
      
     @@ builtin/gc.c: static int pack_loose(struct maintenance_run_opts *opts)
    - 	data.count = 0;
    - 	data.batch_size = 50000;
    + 	else if (data.batch_size > 0)
    + 		data.batch_size--; /* Decrease for equality on limit. */
      
     -	for_each_loose_file_in_objdir(r->objects->odb->path,
     +	for_each_loose_file_in_objdir(object_dir,
  • 128: 379878d = 111: 128e08b unpack-trees:virtualfilesystem: Improve efficiency of clear_ce_flags

  • 129: e11a754 = 112: 08cef42 homebrew: add GitHub workflow to release Cask

  • 130: 747ead2 = 113: 2ffb336 Adding winget workflows

  • 131: f65f327 ! 114: ce137e2 Disable the monitor-components workflow in msft-git

    @@ .github/workflows/monitor-components.yml (deleted)
     -            feed: https://github.com/petervanderdoes/gitflow-avh/tags.atom
     -          - label: curl
     -            feed: https://github.com/curl/curl/tags.atom
    +-            title-pattern: ^(?!rc-)
     -          - label: libgpg-error
     -            feed: https://github.com/gpg/libgpg-error/releases.atom
     -            title-pattern: ^libgpg-error-[0-9\.]*$
    @@ .github/workflows/monitor-components.yml (deleted)
     -            title-pattern: ^libgcrypt-[0-9\.]*$
     -          - label: gpg
     -            feed: https://github.com/gpg/gnupg/releases.atom
    +-            # As per https://gnupg.org/download/index.html#sec-1-1, the stable
    +-            # versions are the one with an even minor version number.
    +-            title-pattern: ^gnupg-\d+\.\d*[02468]\.
     -          - label: mintty
     -            feed: https://github.com/mintty/mintty/releases.atom
     -          - label: 7-zip
  • 132: 58e652e = 115: 223a8ba .github: enable windows builds on microsoft fork

  • 133: b090149 = 116: 0ef1322 .github/actions/akv-secret: add action to get secrets

  • 134: dc95698 = 117: e9b8a43 release: create initial Windows installer build workflow

  • 135: cf0fc79 = 118: 17a91b7 release: create initial Windows installer build workflow

  • 136: b04e8b5 = 119: 88a4616 help: special-case HOST_CPU universal

  • 137: d8e02dd = 120: 2180612 release: add Mac OSX installer build

  • 138: b53d6aa = 121: a8adb19 release: build unsigned Ubuntu .deb package

  • 139: a29b3c5 = 122: b0df148 release: add signing step for .deb package

  • 142: 8cff8f6 = 123: 5e72bca update-microsoft-git: create barebones builtin

  • 140: 5bd8828 = 124: 96e1c99 release: create draft GitHub release with packages & installers

  • 144: 869bf03 = 125: 137a64c update-microsoft-git: Windows implementation

  • 122: b037c1e = 126: 9ff3bdb gvfs-helper: ignore .idx files in prefetch multi-part responses

  • 141: 2f10b77 = 127: 9650baa build-git-installers: publish gpg public key

  • 146: e894cef = 128: 66f158c update-microsoft-git: use brew on macOS

  • 123: 1e6e424 = 129: d9532ce t5799: explicitly test gvfs-helper --fallback and --no-fallback

  • 143: a549c3f = 130: 513e6ec release: continue pestering until user upgrades

  • 87: 70cb6f7 = 131: 0bb8f8f git_config_set_multivar_in_file_gently(): add a lock timeout

  • 148: e26fd6b ! 132: 2b5995f .github: update ISSUE_TEMPLATE.md for microsoft/git

    @@ Metadata
     Author: Derrick Stolee <stolee@gmail.com>
     
      ## Commit message ##
    -    .github: update ISSUE_TEMPLATE.md for microsoft/git
    +    .github: reinstate ISSUE_TEMPLATE.md for microsoft/git
     
    -    We have been using the default issue template from git-for-windows/git,
    +    We had been using the default issue template from git-for-windows/git,
         but we should ask different questions than Git for Windows. Update the
         issue template to ask these helpful questions.
     
         Signed-off-by: Derrick Stolee <derrickstolee@github.com>
     
    - ## .github/ISSUE_TEMPLATE.md ##
    + ## .github/ISSUE_TEMPLATE.md (new) ##
     @@
    -- - [ ] I was not able to find an [open](https://github.com/git-for-windows/git/issues?q=is%3Aopen) or [closed](https://github.com/git-for-windows/git/issues?q=is%3Aclosed) issue matching what I'm seeing
     + - [ ] I was not able to find an [open](https://github.com/microsoft/git/issues?q=is%3Aopen)
     +        or [closed](https://github.com/microsoft/git/issues?q=is%3Aclosed) issue matching
     +        what I'm seeing, including in [the `git-for-windows/git` tracker](https://github.com/git-for-windows/git/issues).
    - 
    - ### Setup
    - 
    -- - Which version of Git for Windows are you using? Is it 32-bit or 64-bit?
    ++
    ++### Setup
    ++
     + - Which version of `microsoft/git` are you using? Is it 32-bit or 64-bit?
    - 
    - ```
    - $ git --version --build-options
    -@@ .github/ISSUE_TEMPLATE.md: $ git --version --build-options
    - ** insert your machine's response here **
    - ```
    - 
    -- - Which version of Windows are you running? Vista, 7, 8, 10? Is it 32-bit or 64-bit?
    ++
    ++```
    ++$ git --version --build-options
    ++
    ++** insert your machine's response here **
    ++```
    ++
     +Are you using Scalar or VFS for Git?
     +
     +** insert your answer here **
     +
     +If VFS for Git, then what version?
    - 
    - ```
    --$ cmd.exe /c ver
    ++
    ++```
     +$ gvfs version
    - 
    - ** insert your machine's response here **
    - ```
    - 
    -- - What options did you set as part of the installation? Or did you choose the
    --   defaults?
    ++
    ++** insert your machine's response here **
    ++```
    ++
     + - Which version of Windows are you running? Vista, 7, 8, 10? Is it 32-bit or 64-bit?
    - 
    - ```
    --# One of the following:
    --> type "C:\Program Files\Git\etc\install-options.txt"
    --> type "C:\Program Files (x86)\Git\etc\install-options.txt"
    --> type "%USERPROFILE%\AppData\Local\Programs\Git\etc\install-options.txt"
    --> type "$env:USERPROFILE\AppData\Local\Programs\Git\etc\install-options.txt"
    --$ cat /etc/install-options.txt
    ++
    ++```
     +$ cmd.exe /c ver
    - 
    - ** insert your machine's response here **
    - ```
    -@@ .github/ISSUE_TEMPLATE.md: $ cat /etc/install-options.txt
    - 
    - ** insert here **
    - 
    -- - If the problem was occurring with a specific repository, can you provide the
    --   URL to that repository to help us with testing?
    ++
    ++** insert your machine's response here **
    ++```
    ++
    ++ - Any other interesting things about your environment that might be related
    ++   to the issue you're seeing?
    ++
    ++** insert your response here **
    ++
    ++### Details
    ++
    ++ - Which terminal/shell are you running Git from? e.g Bash/CMD/PowerShell/other
    ++
    ++** insert your response here **
    ++
    ++ - What commands did you run to trigger this issue? If you can provide a
    ++   [Minimal, Complete, and Verifiable example](http://stackoverflow.com/help/mcve)
    ++   this will help us understand the issue.
    ++
    ++```
    ++** insert your commands here **
    ++```
    ++ - What did you expect to occur after running these commands?
    ++
    ++** insert here **
    ++
    ++ - What actually happened instead?
    ++
    ++** insert here **
    ++
     + - If the problem was occurring with a specific repository, can you specify
     +   the repository?
    - 
    --** insert URL here **
    ++
     +   * [ ] Public repo: **insert URL here**
     +   * [ ] Windows monorepo
     +   * [ ] Office monorepo
     +   * [ ] Other Microsoft-internal repo: **insert name here**
     +   * [ ] Other internal repo.
    +
    + ## .github/ISSUE_TEMPLATE/bug-report.yml (deleted) ##
    +@@
    +-name: Bug report
    +-description: Use this template to report bugs.
    +-body:
    +-  - type: checkboxes
    +-    id: search
    +-    attributes:
    +-      label: Existing issues matching what you're seeing
    +-      description: Please search for [open](https://github.com/git-for-windows/git/issues?q=is%3Aopen) or [closed](https://github.com/git-for-windows/git/issues?q=is%3Aclosed) issue matching what you're seeing before submitting a new issue.
    +-      options:
    +-        - label: I was not able to find an open or closed issue matching what I'm seeing
    +-  - type: textarea
    +-    id: git-for-windows-version
    +-    attributes:
    +-      label: Git for Windows version
    +-      description: Which version of Git for Windows are you using?
    +-      placeholder: Please insert the output of `git --version --build-options` here
    +-      render: shell
    +-    validations:
    +-      required: true
    +-  - type: dropdown
    +-    id: windows-version
    +-    attributes:
    +-      label: Windows version
    +-      description: Which version of Windows are you running?
    +-      options:
    +-        - Windows 8.1
    +-        - Windows 10
    +-        - Windows 11
    +-        - Other
    +-      default: 2
    +-    validations:
    +-      required: true
    +-  - type: dropdown
    +-    id: windows-arch
    +-    attributes:
    +-      label: Windows CPU architecture
    +-      description: What CPU Archtitecture does your Windows target?
    +-      options:
    +-        - i686 (32-bit)
    +-        - x86_64 (64-bit)
    +-        - ARM64
    +-      default: 1
    +-    validations:
    +-      required: true
    +-  - type: textarea
    +-    id: windows-version-cmd
    +-    attributes:
    +-      label: Additional Windows version information
    +-      description: This provides us with further information about your Windows such as the build number
    +-      placeholder: Please insert the output of `cmd.exe /c ver` here
    +-      render: shell
    +-  - type: textarea
    +-    id: options
    +-    attributes:
    +-      label: Options set during installation
    +-      description: What options did you set as part of the installation? Or did you choose the defaults?
    +-      placeholder: |
    +-        One of the following:
    +-        > type "C:\Program Files\Git\etc\install-options.txt"
    +-        > type "C:\Program Files (x86)\Git\etc\install-options.txt"
    +-        > type "%USERPROFILE%\AppData\Local\Programs\Git\etc\install-options.txt"
    +-        > type "$env:USERPROFILE\AppData\Local\Programs\Git\etc\install-options.txt"
    +-        $ cat /etc/install-options.txt
    +-      render: shell
    +-    validations:
    +-      required: true
    +-  - type: textarea
    +-    id: other-things
    +-    attributes:
    +-      label: Other interesting things
    +-      description: Any other interesting things about your environment that might be related to the issue you're seeing?
    +-  - type: input
    +-    id: terminal
    +-    attributes:
    +-      label: Terminal/shell
    +-      description: Which terminal/shell are you running Git from? e.g Bash/CMD/PowerShell/other
    +-    validations:
    +-      required: true
    +-  - type: textarea
    +-    id: commands
    +-    attributes:
    +-      label: Commands that trigger the issue
    +-      description: What commands did you run to trigger this issue? If you can provide a [Minimal, Complete, and Verifiable example](http://stackoverflow.com/help/mcve) this will help us understand the issue.
    +-      render: shell
    +-    validations:
    +-      required: true
    +-  - type: textarea
    +-    id: expected-behaviour
    +-    attributes:
    +-      label: Expected behaviour
    +-      description: What did you expect to occur after running these commands?
    +-    validations:
    +-      required: true
    +-  - type: textarea
    +-    id: actual-behaviour
    +-    attributes:
    +-      label: Actual behaviour
    +-      description: What actually happened instead?
    +-    validations:
    +-      required: true
    +-  - type: textarea
    +-    id: repository
    +-    attributes:
    +-      label: Repository
    +-      description: If the problem was occurring with a specific repository, can you provide the URL to that repository to help us with testing?
    + \ No newline at end of file
    +
    + ## .github/ISSUE_TEMPLATE/config.yml (deleted) ##
    +@@
    +-blank_issues_enabled: false
    + \ No newline at end of file
  • 124: 3653bff ! 133: 1a16b20 gvfs-helper: don't fallback with new config

    @@ Documentation/config/gvfs.adoc: gvfs.cache-server::
     
      ## gvfs-helper-client.c ##
     @@
    - #include "quote.h"
    - #include "packfile.h"
    - #include "hex.h"
    + #define USE_THE_REPOSITORY_VARIABLE
    + #include "git-compat-util.h"
     +#include "config.h"
    - 
    - static struct oidset gh_client__oidset_queued = OIDSET_INIT;
    - static unsigned long gh_client__oidset_count;
    + #include "dir.h"
    + #include "environment.h"
    + #include "gvfs-helper-client.h"
     @@ gvfs-helper-client.c: static struct gh_server__process *gh_client__find_long_running_process(
      	struct gh_server__process *entry;
      	struct strvec argv = STRVEC_INIT;
  • 145: bf9419b = 134: 646d24b dist: archive HEAD instead of HEAD^{tree}

  • 88: 4c721a0 ! 135: 3485bb3 scalar: set the config write-lock timeout to 150ms

    @@ scalar.c: static int set_recommended_config(int reconfigure)
      	};
      	int i;
     @@ scalar.c: static int set_recommended_config(int reconfigure)
    - 
    +  */
      static int toggle_maintenance(int enable)
      {
     +	unsigned long ul;
  • 150: faf44c1 = 136: 62f8bc6 .github: update PULL_REQUEST_TEMPLATE.md

  • 125: b5c3625 = 137: 0091209 test-gvfs-protocol: add cache_http_503 to mayhem

  • 147: 599f716 = 138: 851e267 release: include GIT_BUILT_FROM_COMMIT in MacOS build

  • 89: 118caea = 139: 9840ecc scalar: add docs from microsoft/scalar

  • 151: 04a260a ! 140: 8cf6560 Adjust README.md for microsoft/git

    @@ README.md
     -issues](https://github.com/git-for-windows/git/issues), discuss them in Git
     -for Windows' [Discussions](https://github.com/git-for-windows/git/discussions)
     -or on the [Git mailing list](mailto:git@vger.kernel.org), and [contribute bug
    --fixes](https://github.com/git-for-windows/git/wiki/How-to-participate).
    +-fixes](https://gitforwindows.org/how-to-participate).
     -
     -To build Git for Windows, please either install [Git for Windows'
     -SDK](https://gitforwindows.org/#download-sdk), start its `git-bash.exe`, `cd`
  • 126: b33fcd8 = 141: 7ab7cde t5799: add unit tests for new gvfs.fallback config setting

  • 149: 71b1133 = 142: 9dd3687 release: add installer validation

  • 90: e52657d = 143: 87a7479 scalar (Windows): use forward slashes as directory separators

  • 91: 45ecc4a = 144: 86dbde9 scalar: add retry logic to run_git()

  • 92: c2ea7fb = 145: b2f4162 scalar: support the config command for backwards compatibility

  • 152: bff228b ! 146: 6af8d0f scalar: implement a minimal JSON parser

    @@ json-parser.c (new)
     +{
     +	const char *begin = it->p;
     +
    -+	if (*(it->p)++ != '"')
    -+		return error("expected double quote: '%.*s'", 5, begin),
    -+			reset_iterator(it);
    ++	if (*(it->p)++ != '"') {
    ++		error("expected double quote: '%.*s'", 5, begin);
    ++		return reset_iterator(it);
    ++	}
     +
     +	strbuf_reset(&it->string_value);
     +#define APPEND(c) strbuf_addch(out, c)
     +	while (*it->p != '"') {
     +		switch (*it->p) {
     +		case '\0':
    -+			return error("incomplete string: '%s'", begin),
    -+				reset_iterator(it);
    ++			error("incomplete string: '%s'", begin);
    ++			return reset_iterator(it);
     +		case '\\':
     +			it->p++;
     +			if (*it->p == '\\' || *it->p == '"')
    @@ json-parser.c (new)
     +				unsigned char binary[2];
     +				int i;
     +
    -+				if (hex_to_bytes(binary, it->p + 1, 2) < 0)
    -+					return error("invalid: '%.*s'",
    -+						     6, it->p - 1),
    -+						reset_iterator(it);
    ++				if (hex_to_bytes(binary, it->p + 1, 2) < 0) {
    ++					error("invalid: '%.*s'", 6, it->p - 1);
    ++					return reset_iterator(it);
    ++				}
     +				it->p += 4;
     +
     +				i = (binary[0] << 8) | binary[1];
    @@ json-parser.c (new)
     +
     +	switch (*it->p) {
     +	case '\0':
    -+		return reset_iterator(it), 0;
    ++		reset_iterator(it);
    ++		return 0;
     +	case 'n':
    -+		if (!starts_with(it->p, "null"))
    -+			return error("unexpected value: %.*s", 4, it->p),
    -+				reset_iterator(it);
    ++		if (!starts_with(it->p, "null")) {
    ++			error("unexpected value: %.*s", 4, it->p);
    ++			return reset_iterator(it);
    ++		}
     +		it->type = JSON_NULL;
     +		it->end = it->p = it->begin + 4;
     +		break;
     +	case 't':
    -+		if (!starts_with(it->p, "true"))
    -+			return error("unexpected value: %.*s", 4, it->p),
    -+				reset_iterator(it);
    ++		if (!starts_with(it->p, "true")) {
    ++			error("unexpected value: %.*s", 4, it->p);
    ++			return reset_iterator(it);
    ++		}
     +		it->type = JSON_TRUE;
     +		it->end = it->p = it->begin + 4;
     +		break;
     +	case 'f':
    -+		if (!starts_with(it->p, "false"))
    -+			return error("unexpected value: %.*s", 5, it->p),
    -+				reset_iterator(it);
    ++		if (!starts_with(it->p, "false")) {
    ++			error("unexpected value: %.*s", 5, it->p);
    ++			return reset_iterator(it);
    ++		}
     +		it->type = JSON_FALSE;
     +		it->end = it->p = it->begin + 5;
     +		break;
    @@ json-parser.c (new)
     +		for (it->p++, skip_whitespace(it); *it->p != ']'; i++) {
     +			strbuf_addf(&it->key, "[%d]", i);
     +
    -+			if ((res = iterate_json(it)))
    -+				return reset_iterator(it), res;
    ++			if ((res = iterate_json(it))) {
    ++				reset_iterator(it);
    ++				return res;
    ++			}
     +			strbuf_setlen(&it->key, key_offset);
     +
     +			skip_whitespace(it);
    @@ json-parser.c (new)
     +			if (parse_json_string(it, &it->key) < 0)
     +				return -1;
     +			skip_whitespace(it);
    -+			if (*(it->p)++ != ':')
    -+				return error("expected colon: %.*s", 5, it->p),
    -+					reset_iterator(it);
    ++			if (*(it->p)++ != ':') {
    ++				error("expected colon: %.*s", 5, it->p);
    ++				return reset_iterator(it);
    ++			}
     +
     +			if ((res = iterate_json(it)))
     +				return res;
  • 153: c85252b ! 147: 2942d9e scalar clone: support GVFS-enabled remote repositories

    @@ scalar.c: static int set_config(const char *fmt, ...)
     @@ scalar.c: static int cmd_clone(int argc, const char **argv)
      	char *branch_to_free = NULL;
      	int full_clone = 0, single_branch = 0, show_progress = isatty(2);
    - 	int src = 1, tags = 1;
    + 	int src = 1, tags = 1, maintenance = 1;
     +	const char *cache_server_url = NULL;
     +	char *default_cache_server_url = NULL;
      	struct option clone_options[] = {
      		OPT_STRING('b', "branch", &branch, N_("<branch>"),
      			   N_("branch to checkout after clone")),
     @@ scalar.c: static int cmd_clone(int argc, const char **argv)
    - 			 N_("create repository within 'src' directory")),
    - 		OPT_BOOL(0, "tags", &tags,
      			 N_("specify if tags should be fetched during clone")),
    + 		OPT_BOOL(0, "maintenance", &maintenance,
    + 			 N_("specify if background maintenance should be enabled")),
     +		OPT_STRING(0, "cache-server-url", &cache_server_url,
     +			   N_("<url>"),
     +			   N_("the url or friendly name of the cache server")),
  • 154: 97b19f8 = 148: 45267f7 test-gvfs-protocol: also serve smart protocol

  • 155: 2b03f40 = 149: ada7a2e gvfs-helper: add the endpoint command

  • 156: 72bb021 = 150: a879721 dir_inside_of(): handle directory separators correctly

  • 157: a4ef597 = 151: f4fa366 scalar: disable authentication in unattended mode

  • 158: 0528ea5 ! 152: 68ff7d3 scalar: do initialize gvfs.sharedCache

    @@ Documentation/scalar.adoc: SYNOPSIS
      --------
      [verse]
      scalar clone [--single-branch] [--branch <main-branch>] [--full-clone]
    --	[--[no-]src] <url> [<enlistment>]
    +-	[--[no-]src] [--[no-]tags] [--[no-]maintenance] <url> [<enlistment>]
    ++	[--[no-]src] [--[no-]tags] [--[no-]maintenance]
     +	[--[no-]src] [--local-cache-path <path>] [--cache-server-url <url>]
     +	<url> [<enlistment>]
      scalar list
    - scalar register [<enlistment>]
    + scalar register [--[no-]maintenance] [<enlistment>]
      scalar unregister [<enlistment>]
     @@ Documentation/scalar.adoc: cloning. If the HEAD at the remote did not point at any branch when
    - 	A sparse-checkout is initialized by default. This behavior can be
    - 	turned off via `--full-clone`.
    + 	background maintenance feature. Use the `--no-maintenance` to skip
    + 	this configuration.
      
     +--local-cache-path <path>::
    -+    Override the path to the local cache root directory; Pre-fetched objects
    -+    are stored into a repository-dependent subdirectory of that path.
    ++	Override the path to the local cache root directory; Pre-fetched objects
    ++	are stored into a repository-dependent subdirectory of that path.
     ++
     +The default is `<drive>:\.scalarCache` on Windows (on the same drive as the
     +clone), and `~/.scalarCache` on macOS.
     +
     +--cache-server-url <url>::
    -+    Retrieve missing objects from the specified remote, which is expected to
    -+    understand the GVFS protocol.
    ++	Retrieve missing objects from the specified remote, which is expected to
    ++	understand the GVFS protocol.
     +
      List
      ~~~~
    @@ scalar.c: void load_builtin_commands(const char *prefix UNUSED,
     +	}
     +
     +	strbuf_addf(&buf, "%s/pack", shared_cache_path);
    -+	switch (safe_create_leading_directories(buf.buf)) {
    ++	switch (safe_create_leading_directories(the_repository, buf.buf)) {
     +	case SCLD_OK: case SCLD_EXISTS:
     +		break; /* okay */
     +	default:
    @@ scalar.c: void load_builtin_commands(const char *prefix UNUSED,
      	const char *branch = NULL;
      	char *branch_to_free = NULL;
      	int full_clone = 0, single_branch = 0, show_progress = isatty(2);
    - 	int src = 1, tags = 1;
    + 	int src = 1, tags = 1, maintenance = 1;
     -	const char *cache_server_url = NULL;
     -	char *default_cache_server_url = NULL;
     +	const char *cache_server_url = NULL, *local_cache_root = NULL;
  • 159: c711d85 = 153: 2dca24a scalar diagnose: include shared cache info

  • 160: 62a4948 = 154: 2a84b25 scalar: only try GVFS protocol on https:// URLs

  • 161: 2eb9d69 = 155: af5b070 scalar: verify that we can use a GVFS-enabled repository

  • 162: 46afbd6 ! 156: d80494b scalar: add the cache-server command

    @@ Commit message
     
      ## Documentation/scalar.adoc ##
     @@ Documentation/scalar.adoc: scalar run ( all | config | commit-graph | fetch | loose-objects | pack-files )
    - scalar reconfigure [ --all | <enlistment> ]
    + scalar reconfigure [--maintenance=(enable|disable|keep)] [ --all | <enlistment> ]
      scalar diagnose [<enlistment>]
      scalar delete <enlistment>
     +scalar cache-server ( --get | --set <url> | --list [<remote>] ) [<enlistment>]
  • 163: c0755e8 = 157: a2db8df scalar: add a test toggle to skip accessing the vsts/info endpoint

  • 164: f51327a = 158: a64baf6 scalar: adjust documentation to the microsoft/git fork

  • 165: 0202b52 = 159: 3b5fa50 scalar: enable untracked cache unconditionally

  • 166: 4562c36 = 160: b664c83 scalar: parse clone --no-fetch-commits-and-trees for backwards compatibility

  • 175: 6b2e1df ! 161: 11ca338 add/rm: allow adding sparse entries when virtual

    @@ builtin/add.c: int cmd_add(int argc,
     
      ## builtin/rm.c ##
     @@
    - #define DISABLE_SIGN_COMPARE_WARNINGS
    + #define USE_THE_REPOSITORY_VARIABLE
      
      #include "builtin.h"
     +#include "environment.h"
    @@ builtin/rm.c
      #include "config.h"
      #include "lockfile.h"
     @@ builtin/rm.c: int cmd_rm(int argc,
    - 	for (i = 0; i < the_repository->index->cache_nr; i++) {
    + 	for (unsigned int i = 0; i < the_repository->index->cache_nr; i++) {
      		const struct cache_entry *ce = the_repository->index->cache[i];
      
     -		if (!include_sparse &&
  • 167: 5360246 ! 162: 638b31b scalar: make GVFS Protocol a forced choice

    @@ Commit message
     
      ## Documentation/scalar.adoc ##
     @@ Documentation/scalar.adoc: clone), and `~/.scalarCache` on macOS.
    -     Retrieve missing objects from the specified remote, which is expected to
    -     understand the GVFS protocol.
    + 	Retrieve missing objects from the specified remote, which is expected to
    + 	understand the GVFS protocol.
      
     +--[no-]gvfs-protocol::
     +	When cloning from a `<url>` with either `dev.azure.com` or
    @@ Documentation/scalar.adoc: clone), and `~/.scalarCache` on macOS.
     
      ## scalar.c ##
     @@ scalar.c: static int cmd_clone(int argc, const char **argv)
    - 	int src = 1, tags = 1;
    + 	int src = 1, tags = 1, maintenance = 1;
      	const char *cache_server_url = NULL, *local_cache_root = NULL;
      	char *default_cache_server_url = NULL, *local_cache_root_abs = NULL;
     +	int gvfs_protocol = -1;
    @@ scalar.c: static int cmd_clone(int argc, const char **argv)
      		OPT_STRING('b', "branch", &branch, N_("<branch>"),
      			   N_("branch to checkout after clone")),
     @@ scalar.c: static int cmd_clone(int argc, const char **argv)
    - 			 N_("create repository within 'src' directory")),
    - 		OPT_BOOL(0, "tags", &tags,
      			 N_("specify if tags should be fetched during clone")),
    + 		OPT_BOOL(0, "maintenance", &maintenance,
    + 			 N_("specify if background maintenance should be enabled")),
     +		OPT_BOOL(0, "gvfs-protocol", &gvfs_protocol,
     +			 N_("force enable (or disable) the GVFS Protocol")),
      		OPT_STRING(0, "cache-server-url", &cache_server_url,
  • 176: d919079 = 163: 4239977 sparse-checkout: add config to disable deleting dirs

  • 168: e7b9ddf = 164: d1068bd scalar: work around GVFS Protocol HTTP/2 failures

  • 177: d8c17fe = 165: d9293a4 diff: ignore sparse paths in diffstat

  • 169: 4b737cb = 166: e38cf25 scalar diagnose: accommodate Scalar's Functional Tests

  • 178: 3f7e070 = 167: 43dc9ea repo-settings: enable sparse index by default

  • 170: ba889ba ! 168: 15b2ae8 ci: run Scalar's Functional Tests

    @@ .github/workflows/scalar-functional-tests.yml (new)
     +      - name: Setup .NET Core
     +        uses: actions/setup-dotnet@v4
     +        with:
    -+          dotnet-version: '3.1.x'
    ++          dotnet-version: '3.1.426'
     +
     +      - name: Install dependencies
     +        run: dotnet restore
  • 171: 586c593 = 169: 424e378 scalar: upgrade to newest FSMonitor config setting

  • 179: 7dc64ed = 170: 2056527 diff(sparse-index): verify with partially-sparse

  • 173: b95c7d8 = 171: cd079fc abspath: make strip_last_path_component() global

  • 174: 88e7f83 = 172: aeca46e scalar: .scalarCache should live above enlistment

  • 180: 7f3ff15 = 173: 6e81488 stash: expand testing for git stash -u

  • 93: cb1cc43 = 174: 5a158dc sequencer: avoid progress when stderr is redirected

  • 94: b0026da (this is an upstream commit) < -: ------------ ci: skip unavailable external software

  • 102: 070586c (squashed into dfa84b1) < -: ------------ gvfs-helper-client.h: define struct object_id

  • 172: cdbd96b (upstreamed in a very different form as a34fef8) < -: ------------ scalar: configure maintenance during 'reconfigure'

  • 181: 6efbad3 = 175: 0343fe9 sparse: add vfs-specific precautions

  • 182: 69514c8 = 176: 2e7bbe4 reset: fix mixed reset when using virtual filesystem

  • 183: 0f0dbc9 = 177: 4571c2d sparse-index: add ensure_full_index_with_reason()

  • 184: ec16e26 ! 178: 931f6e8 treewide: add reasons for expanding index

    @@ Commit message
         Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
     
      ## builtin/checkout-index.c ##
    -@@ builtin/checkout-index.c: static int checkout_all(const char *prefix, int prefix_length)
    +@@ builtin/checkout-index.c: static int checkout_all(struct index_state *index, const char *prefix, int prefi
      			 * first entry inside the expanded sparse directory).
      			 */
      			if (ignore_skip_worktree) {
    --				ensure_full_index(the_repository->index);
    -+				ensure_full_index_with_reason(the_repository->index,
    -+							      "checkout-index");
    - 				ce = the_repository->index->cache[i];
    +-				ensure_full_index(index);
    ++				ensure_full_index_with_reason(index, "checkout-index");
    + 				ce = index->cache[i];
      			}
      		}
     
    @@ builtin/rm.c: int cmd_rm(int argc,
     +		ensure_full_index_with_reason(the_repository->index,
     +					      "rm pathspec");
      
    - 	for (i = 0; i < the_repository->index->cache_nr; i++) {
    + 	for (unsigned int i = 0; i < the_repository->index->cache_nr; i++) {
      		const struct cache_entry *ce = the_repository->index->cache[i];
     
      ## builtin/sparse-checkout.c ##
    @@ repository.c: int repo_read_index(struct repository *repo)
      	 * If sparse checkouts are in use, check whether paths with the
     
      ## sequencer.c ##
    -@@ sequencer.c: static int do_recursive_merge(struct repository *r,
    - 		merge_switch_to_result(&o, head_tree, &result, 1, show_output);
    - 		clean = result.clean;
    - 	} else {
    --		ensure_full_index(r->index);
    -+		ensure_full_index_with_reason(r->index, "non-ort merge strategy");
    - 		clean = merge_trees(&o, head_tree, next_tree, base_tree);
    - 		if (is_rebase_i(opts) && clean <= 0)
    - 			fputs(o.obuf.buf, stdout);
     @@ sequencer.c: static int read_and_refresh_cache(struct repository *r,
      	 * expand the sparse index.
      	 */
  • 185: 24f9d69 = 179: 846189d treewide: custom reasons for expanding index

  • 186: 0d9f9a8 ! 180: 0ccfb5a sparse-index: add macro for unaudited expansions

    @@ entry.c: static void mark_colliding_entries(const struct checkout *state,
      		struct cache_entry *dup = state->istate->cache[i];
      
     
    - ## merge-recursive.c ##
    -@@ merge-recursive.c: static struct string_list *get_unmerged(struct index_state *istate)
    - 	string_list_init_dup(unmerged);
    - 
    - 	/* TODO: audit for interaction with sparse-index. */
    --	ensure_full_index(istate);
    -+	ensure_full_index_unaudited(istate);
    - 	for (i = 0; i < istate->cache_nr; i++) {
    - 		struct string_list_item *item;
    - 		struct stage_data *e;
    -
      ## read-cache.c ##
     @@ read-cache.c: int repo_index_has_changes(struct repository *repo,
      		return opt.flags.has_changes != 0;
  • 187: 9316584 = 181: 706a323 Docs: update sparse index plan with logging

  • 188: bd9503a = 182: 6720b36 sparse-index: log failure to clear skip-worktree

  • 189: bd5f482 = 183: 23446d3 stash: use -f in checkout-index child process

  • 190: 114d467 = 184: 552564f sparse-index: do not copy hashtables during expansion

  • 191: aa12433 = 185: 98d1f23 path-walk: add new 'edge_aggressive' option

  • 192: b47436a = 186: 056cdd5 pack-objects: allow --shallow and --path-walk

  • 193: a861b3e = 187: 163de2d t5538: add test to confirm deltas in shallow pushes

  • 194: 6c919c5 = 188: 407974b sub-process: avoid leaking cmd

  • 195: f977bbc = 189: 86feae8 remote-curl: release filter options before re-setting them

  • 196: 7cc8d4a = 190: 538826f transport: release object filter options

  • 197: d0fcae0 (upstream: 03a4e46) < -: ------------ mingw: special-case administrators even more

  • 198: ce75f5b (upstream: 5bb88e8) < -: ------------ test-tool path-utils: support debugging "dubious ownership" issues

  • 199: 6259137 = 191: 98000d7 push: don't reuse deltas with path walk

  • 200: 1b97648 = 192: a46efae t7900-maintenance.sh: reset config between tests

  • 201: 3b62f39 ! 193: a5bf552 maintenance: add cache-local-objects maintenance task

    @@ Documentation/git-maintenance.adoc: task:
      --
      +
      `git maintenance register` will also disable foreground maintenance by
    -@@ Documentation/git-maintenance.adoc: pack-refs::
    - 	need to iterate across many references. See linkgit:git-pack-refs[1]
    - 	for more information.
    +@@ Documentation/git-maintenance.adoc: worktree-prune::
    + 	The `worktree-prune` task deletes stale or broken worktrees. See
    + 	linkgit:git-worktree[1] for more information.
      
     +cache-local-objects::
     +	The `cache-local-objects` task only operates on Scalar or VFS for Git
    @@ builtin/gc.c
     +#include "git-compat-util.h"
      #include "builtin.h"
      #include "abspath.h"
    - #include "date.h"
    -@@
    - #include "hook.h"
    - #include "setup.h"
    - #include "trace2.h"
     +#include "copy.h"
    -+#include "dir.h"
    - 
    - #define FAILED_RUN "failed to run %s"
    - 
    + #include "date.h"
    + #include "dir.h"
    + #include "environment.h"
     @@ builtin/gc.c: static int maintenance_task_incremental_repack(struct maintenance_run_opts *opts
      	return 0;
      }
    @@ builtin/gc.c: static int maintenance_task_incremental_repack(struct maintenance_
      				struct gc_config *cfg);
      
     @@ builtin/gc.c: enum maintenance_task_label {
    - 	TASK_GC,
    - 	TASK_COMMIT_GRAPH,
    - 	TASK_PACK_REFS,
    + 	TASK_REFLOG_EXPIRE,
    + 	TASK_WORKTREE_PRUNE,
    + 	TASK_RERERE_GC,
     +	TASK_CACHE_LOCAL_OBJS,
      
      	/* Leave as final value */
      	TASK__COUNT
     @@ builtin/gc.c: static struct maintenance_task tasks[] = {
    - 		maintenance_task_pack_refs,
    - 		pack_refs_condition,
    + 		maintenance_task_rerere_gc,
    + 		rerere_gc_condition,
      	},
     +	[TASK_CACHE_LOCAL_OBJS] = {
     +		"cache-local-objects",
  • 202: 2447236 = 194: 31569a9 scalar.c: add cache-local-objects task

  • 203: 3dc10cc = 195: e9a3a10 git.c: add VFS enabled cmd blocking

  • 204: 39c6e27 = 196: 08d40ca git.c: permit repack cmd in Scalar repos

  • 205: 7e7ebca = 197: 34661f9 git.c: permit fsck cmd in Scalar repos

  • 206: 4b136a4 = 198: 5fcc608 git.c: permit prune cmd in Scalar repos

  • 209: 865b4f2 = 199: 7c96b1e hooks: add custom post-command hook config

  • 207: f0357cf = 200: 1704aa8 worktree: remove special case GVFS cmd blocking

  • 210: bc2e47c = 201: 7eab4f3 Docs: fix asciidoc failures from short delimiters

  • 208: 26503a2 = 202: 8fca170 builtin/repack.c: emit warning when shared cache is present

  • 211: 3b7462d = 203: ba650c4 hooks: make hook logic memory-leak free

  • 212: b7d2252 (upstream: 832d9f6) < -: ------------ ci: upgrade sparse to supported build agents

  • 213: 19be79a (upstream: da87b58) < -: ------------ sparse: ignore warning from new glibc headers

  • 214: a704627 (upstream: 8a471a6) < -: ------------ ci(pedantic): ensure that awk is installed

  • 215: 0efa9e7 (upstream: 956acbe) < -: ------------ ci(jgit): use a more reliable link to download JGit

  • 216: 8af1a1e (upstream: 89d557b) < -: ------------ test-tool: add pack-deltas helper

  • 217: b823b7a ! 204: 6991013 t5309: create failing test for 'git index-pack'

    @@ Commit message
         Signed-off-by: Derrick Stolee <stolee@gmail.com>
     
      ## t/t5309-pack-delta-cycles.sh ##
    -@@ t/t5309-pack-delta-cycles.sh: test_expect_success 'failover to a duplicate object in the same pack' '
    - 	test_must_fail git index-pack --fix-thin --stdin <recoverable.pack
    +@@ t/t5309-pack-delta-cycles.sh: test_expect_success 'index-pack works with thin pack A->B->C with B on disk' '
    + 	)
      '
      
     +test_expect_failure 'index-pack works with thin pack A->B->C with B on disk' '
  • 218: f83ee41 (upstream: 98f8854) < -: ------------ index-pack: allow revisiting REF_DELTA chains

  • 219: 175a8be (upstream: 6f11c42) < -: ------------ curl: fix integer constant typechecks with curl_easy_setopt()

  • 220: f895bb3 (upstream: 30325e2) < -: ------------ curl: fix integer variable typechecks with curl_easy_setopt()

  • 221: 8ea7a50 (upstream: 4558c8f) < -: ------------ curl: fix symbolic constant typechecks with curl_easy_setopt()

  • 222: 443ded5 (upstream: 229d126) < -: ------------ curl: pass long values where expected

  • 223: 74a38e8 = 205: 6ff804c gvfs-helper: pass long values where expected

  • 224: cb13cb7 (squashed into 15b2ae8) < -: ------------ ci(scalar): work around bug in actions/setup-dotnet

  • 225: 5240313 (upstream: 882efe0) < -: ------------ ci(coverity): fix building on Windows

  • 226: a07724a (upstream: 3cc4fc1) < -: ------------ ci(coverity): output the build log upon error

  • 227: 2287b74 = 206: 2c69809 gvfs-helper-client: clean up server process(es)

This is admittedly a large range-diff...

Part of the reason is a number of upstream changes that required adjustments:

- `write_object_file_literally()` was deleted, as was `merge-recursive.c`.
- Using `OPTION_CALLBACK` was fragile because the option struct changed; we should have used `OPT_CALLBACK_F()` in the first place.
- The `OPT_MAGNITUDE()` macro was superseded by `OPT_UNSIGNED()`, hence those parts of our code had to be updated.
- 5b3d2d8 needed to be tightened to apply only to VFS for Git clones, but not to Scalar ones (this was clearly the intention, and now there is a regression test for it).
- bdd8bcb wants to avoid fetching objects while parsing packfiles, and 7862c20 still does that (even if `HAS_OBJECT_FETCH_PROMISOR` might be enough, I wanted to stay on the safe side).
- Git for Windows replaced its issue reporting template with a form, but we still want a template (because it is much easier to maintain).
- Finally, we had to accommodate clang's `-Wcomma` mode in the `json-parser.c` source code.

The rest is context changes.
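For anyone unfamiliar with how a comparison like the one above is produced: the three-argument form of `git range-diff <base> <old-tip> <new-tip>` matches up the commits of two versions of a topic. A minimal sketch in a throwaway repository (branch names here are placeholders, not the actual `vfs-*` refs of this PR):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git config user.name "Test"
git config user.email "test@example.com"

# An "upstream" history with one base commit.
git commit -q --allow-empty -m "base"

# First version of the topic, on top of the old base.
git checkout -q -b topic-v1
echo one >file
git add file
git commit -q -m "feature: add file"

# Upstream moves forward (the new release to rebase onto).
git checkout -q main
git commit -q --allow-empty -m "upstream: new release"

# Second version of the topic: the same commit, rebased.
git checkout -q -b topic-v2 topic-v1
git rebase -q main

# Compare both versions of the topic; unchanged patches show up as '='.
git range-diff main topic-v1 topic-v2
```

Commits whose patches survive the rebase unchanged are marked with `=`, reworded or adjusted ones with `!`, and dropped or newly upstreamed ones with `<` / `>`, which is exactly the notation used in the range-diff above.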

@dscho dscho marked this pull request as ready for review June 16, 2025 22:12

@derrickstolee derrickstolee left a comment


I took a look at the range-diff. I appreciate all the work you did, @dscho, navigating the upstream refactors. I'm particularly looking forward to seeing this release as Office monorepo users will benefit from the new sparse index integration with git add -p.
